Speeches, Presentations & Articles

"PCPD’s Guidance on the Ethical Development and Use of Artificial Intelligence" -- Privacy Commissioner's article contribution at Hong Kong Lawyer (September 2021)

In the words of Professor Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, “AI holds the potential to deliver enormous benefits to society, but only if it is used responsibly.”

Organisations in Hong Kong, both private and public, are poised to increase the adoption of artificial intelligence (“AI”) in their operations. However, AI poses challenges to privacy, the protection of personal data and, potentially, the rights and interests of individual users. My office, the Office of the Privacy Commissioner for Personal Data, Hong Kong (“PCPD”), has recently published the Guidance on the Ethical Development and Use of Artificial Intelligence (“Guidance”) to facilitate the healthy development and use of AI in Hong Kong, while assisting organisations to comply with the requirements of the Personal Data (Privacy) Ordinance (Cap. 486) (“PDPO”) in the process.


What is AI?

Definitions of AI vary across the literature. Broadly, AI refers to a family of technologies that involve the use of computer programmes and machines to mimic the problem-solving or decision-making capabilities of human beings. Examples of AI applications include facial recognition, speech recognition, chatbots, data analytics and automated decision-making or recommendations. As AI technologies are still evolving, new applications are likely to continue to emerge.

Significance and risks of AI

According to research published in 2020, 78% of Hong Kong businesses believed that AI was beneficial and conducive to enhancing the quality and efficiency of their services. Given that data is the lifeblood of AI, and in line with the Outline Development Plan for the Guangdong-Hong Kong-Macao Greater Bay Area, the healthy development and use of AI can help Hong Kong exploit its advantages as a regional data hub, as well as empower Hong Kong to become an innovation and technology hub and a world-class smart city.

That said, the ethical considerations of using AI in decision-making have also come under the spotlight internationally in recent years, especially amid reports of bias and discrimination associated with the use of AI.

Calls for ethical AI

Against this backdrop, calls for the accountable and ethical use of AI have been mounting in recent years. One of the earlier local publications on this topic was the Ethical Accountability Framework for Hong Kong, China, published by the PCPD in October 2018, which recommended that organisations adhere to three basic Data Stewardship Values (namely, being respectful, beneficial and fair to stakeholders) in the adoption of data-driven technologies. Essential principles and practical guidance specifically targeting AI have also been promulgated by many international bodies, such as the European Commission, the Global Privacy Assembly and the OECD. The European Commission made a proposal for regulating AI by legislative means in April 2021. If passed, it may become the world’s first statutory regulation of AI.

PCPD’s Guidance

While a consensus regarding whether AI should be regulated by legislation or other means has yet to be reached, it is high time that some practical guidance was provided for organisations in Hong Kong on the ethical development and use of AI.

Expanding from the three Data Stewardship Values of being respectful, beneficial and fair to stakeholders, the Guidance outlines seven ethical principles for AI and provides a four-part practice guide that follows the structure of a general business process to help organisations manage their AI systems throughout their lifecycle. The seven principles, namely accountability, human oversight, transparency and interpretability, data privacy, fairness, beneficial AI, and reliability, robustness and security, are in line with internationally recognised principles in the field.

First and foremost, recognising that buy-in from the top is a critical ingredient of success, the Guidance recommends that organisations establish an internal governance structure to steer their development and use of AI. This may comprise an organisational-level AI strategy and an AI governance committee. It is also desirable to cultivate an ethical and privacy-friendly culture by conveying to all personnel the importance of ethical AI values.

Second, the Guidance recommends that organisations conduct early and comprehensive risk assessments to identify and evaluate the privacy and ethical risks of the use of AI, so that appropriate risk mitigation measures can be deployed. Risk factors to consider include the volume and sensitivity of the data used, the accuracy and reliability of the data, the potential impact of AI on stakeholders, the significance of that impact should it occur, and the likelihood of its occurrence. In short, organisations should adopt a risk-based approach in managing AI.

Organisations should determine, from the assessed level of risk, the extent of human participation appropriate for the decision-making process of an AI system.
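For readers who prefer a concrete illustration, the following is a minimal sketch, in Python, of how such a risk-based approach might be operationalised. The factor ratings, the multiplicative scoring and the thresholds are assumptions made purely for illustration; they do not appear in the Guidance, which leaves the design of risk assessments to each organisation.

```python
# Illustrative sketch only: a hypothetical scoring of some of the risk factors
# mentioned above (data sensitivity, severity of impact on stakeholders, and
# likelihood of that impact), mapped to a level of human participation.
# The factor names, weights and thresholds are assumptions for illustration,
# not part of the PCPD Guidance.

def assess_ai_risk(data_sensitivity: int, impact_severity: int, likelihood: int) -> str:
    """Each factor is rated from 1 (low) to 5 (high); returns a suggested oversight level."""
    score = data_sensitivity * impact_severity * likelihood  # ranges from 1 to 125

    if score >= 60:
        return "human-in-the-loop: a person reviews and approves each AI-assisted decision"
    elif score >= 20:
        return "human oversight: a person monitors outputs and can intervene or override"
    else:
        return "routine monitoring and periodic review of the AI system"


if __name__ == "__main__":
    # Example: an AI application using fairly sensitive personal data, with
    # moderate potential impact on individuals and moderate likelihood of harm.
    print(assess_ai_risk(data_sensitivity=4, impact_severity=3, likelihood=3))
```

However the scoring is designed, the point of the exercise is the same as in the paragraphs above: the higher the assessed risk, the greater the degree of human participation that should be built into the AI system's decision-making process.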

The third part of the practice guide focuses on the development and management of AI systems. Data processing in the development of AI models has to comply with the relevant requirements of the PDPO insofar as personal data is involved. AI models should be rigorously tested before use and regularly tuned after deployment to ensure robustness and reliability. Security measures should be in place to protect AI systems and data against attacks and data leakage. Periodic review of risks, as well as re-training or fine-tuning of the AI models, is recommended to ensure that the models remain reliable.

Last but not least, organisations’ use of AI should be transparent to stakeholders in order to demonstrate the organisations’ commitment and adherence to applicable legal requirements and ethical values. Organisations should also provide explanations of decisions made or assisted by AI, as well as channels for individuals involved to correct inaccuracies, provide feedback, seek explanation, request human intervention and/or opt out from interacting with AI, where possible.

In this increasingly data-driven economy, I believe that trust is pivotal to success. I hope that the Guidance will help organisations in Hong Kong manage the privacy and ethical risks associated with AI systems, thereby facilitating the enhanced use of AI, while demonstrating their trustworthiness to consumers and other stakeholders at large, and eventually unlocking the gate to success.