
"Artificial Intelligence: The Model Personal Data Protection Framework" – Privacy Commissioner’s article contribution at Hong Kong Lawyer (Aug 2024)

Since my Office published the Guidance on the Ethical Development and Use of Artificial Intelligence (the ‘2021 AI Guidance’) in 2021, artificial intelligence (AI) has remained in the spotlight. We are now in an era in which the use of ChatGPT has grown exponentially and organisations worldwide are racing to leverage the technology, if they have not already done so. According to a survey published by the Hong Kong Productivity Council in October 2023, around 50% of local enterprises are expected to use AI by the end of 2024.
 
Despite the opportunities that AI offers, we should heed the cautionary words of Stephen Hawking, who once said that ‘success in creating AI could be the biggest event in the history of our civilisation. But it could also be the last, unless we learn how to avoid the risks’. Among the risks posed by AI, privacy stands out as one of the most significant. Governments around the world have taken note of this concern and have recognised the need for targeted regulations on AI and robust frameworks that balance innovation with, among other things, the protection of personal data privacy.

Regulatory landscape of AI worldwide 
In response to the controversy over whether and how to regulate AI, some governments have crafted new regulations tailored to AI, while others are relying on existing laws.
 
In Europe, the Artificial Intelligence Act, the world’s first comprehensive regulation governing the use of AI across all sectors and industries, entered into force on 1 August 2024. The regulatory framework is underpinned by a risk-based approach whereby AI systems are classified according to the intensity and scope of the risks they pose. Stricter rules are imposed on AI systems that pose higher levels of risk, and certain practices deemed to present unacceptable risks (e.g., the use of AI systems for emotion recognition in the workplace or educational institutions) are prohibited.
 
Meanwhile, the Mainland has unveiled a salvo of regulations and administrative measures targeting AI, including the Interim Administrative Measures for Generative Artificial Intelligence Services (《生成式人工智能服務管理暫行辦法》), which took effect in August 2023 to ensure that the provision and use of generative AI services do not infringe, among other things, individuals’ privacy, and the ‘Global AI Governance Initiative’ promulgated in October 2023, whereby the Mainland government emphasised that it values the development and security of AI technology equally.
 
Across the Pacific, the Biden administration has issued an Executive Order which, inter alia, sets out the disclosure requirements for developers of foundation models that pose serious national security risks to ensure the safe, secure and trustworthy development of AI.  
 
In contrast, the United Kingdom has long championed a ‘pro-innovation’ approach to AI regulation. However, it is worth noting that the new government has recently indicated its intention to introduce legislation regulating the development of powerful AI models.
 
Hong Kong: PCPD’s pioneering guidance on AI 
Where does Hong Kong stand in this evolving technological landscape? While Hong Kong has no overarching regulation tailor-made for AI, the Personal Data (Privacy) Ordinance (the ‘PDPO’), as a piece of principle-based and technology-neutral legislation, applies equally to AI. Data users are bound by the provisions of the PDPO, including its Data Protection Principles (‘DPPs’), irrespective of the type and state of the art of the technological means adopted to collect and use personal data.
 
To better understand the implications of AI for personal data privacy and assess whether organisations’ data protection practices comply with the requirements of the PDPO, my Office carried out compliance checks on 28 local organisations between August 2023 and February 2024. I am pleased to see that no contravention of the PDPO was found. 
 
Beyond regulatory compliance, my Office published the 2021 AI Guidance in the autumn of 2021 to facilitate the healthy development and use of AI in Hong Kong. As one of the earliest regulatory documents on AI in Hong Kong, the 2021 AI Guidance introduced three Data Stewardship Values (being respectful, beneficial, and fair to stakeholders) and seven Ethical Principles for AI (accountability; human oversight; transparency and interpretability; data privacy; fairness; beneficial AI; and reliability, robustness and security), which are internationally well-accepted norms, to guide organisations in developing and using AI in a privacy-friendly manner. 
 
To support the Global AI Governance Initiative promulgated by the Mainland and to underscore the importance of AI security, which is one of the major fields of national security, in June 2024 my Office published the Artificial Intelligence: Model Personal Data Protection Framework (the ‘Model Framework’), which contains a set of recommendations and best practices regarding the protection of personal data privacy for organisations that procure, implement and use any type of AI system, including generative AI. I believe that the Model Framework will help nurture the healthy and safe development of AI in Hong Kong, facilitate Hong Kong’s development as an innovation and technology hub, and propel the expansion of the digital economy both in Hong Kong and the Greater Bay Area.
 
The Model Framework is premised on general business processes and covers recommendations and best practices in the following four areas: (i) AI strategy and governance; (ii) risk assessment and human oversight; (iii) the customisation of AI models and the implementation and management of AI systems; and (iv) communication and engagement with stakeholders.

(i) AI strategy and governance
Firstly, buy-in from top management is critical to procuring, implementing and using AI responsibly and ethically. The Model Framework recommends that organisations establish an internal AI governance strategy to steer the process. The AI governance strategy should contain an organisational-level AI strategy that provides direction on the purposes for which AI systems may be adopted in the organisation. When procuring third-party AI systems, organisations are recommended to take into account governance considerations in their supplier management processes, such as the key privacy and security obligations and ethical requirements to be conveyed to potential AI suppliers, as well as the international technical and governance standards expected of them. In particular, depending on the data that the organisation provides to the AI supplier and the instructions that it gives on AI model development and customisation, certain compliance issues regarding the PDPO should be considered, such as the respective responsibilities of the data user and the data processor (if any) under the PDPO, the legality of cross-border data transfers and data security considerations. 
 
The success of an AI governance strategy lies in the hands of those who execute it. In this regard, a cross-disciplinary and executive-led AI governance committee that steers the implementation of the AI governance strategy is indispensable. However, employees have an important role to play, too, and should be adequately trained on personal data privacy in the context of AI to nurture a privacy-protecting culture with the Ethical Principles for AI in mind.

(ii) Risk assessment and human oversight
Secondly, the Model Framework recommends that organisations conduct comprehensive risk assessments to systematically identify, analyse and evaluate the risks, including privacy risks, involved in the AI lifecycle, so that they can deploy commensurate risk mitigation measures, including the appropriate level of human oversight. 
 
Factors to consider in such risk assessments include the allowable uses of data for customising and using AI, the volume and sensitivity of the data used, the security of the data (especially where transferred into or out of the organisation’s system), the accuracy and reliability of the data, and the adequacy of the risk mitigation measures in place. Once the risks have been identified, organisations should adopt a risk-based approach to managing their AI systems. Based on the risk profile of the AI system being deployed, organisations should then determine the extent of human oversight appropriate for the decision making and output generation process of the system. In general, the higher the risk, the more stringent the human oversight should be. 
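
By way of illustration only, the short sketch below shows how an organisation might operationalise such a risk-based approach, mapping a use case’s risk profile to a level of human oversight. The risk factors, scoring and oversight tiers shown are illustrative assumptions, not prescriptions of the Model Framework or the PDPO.

```python
# Illustrative sketch only: the factors, scores and oversight tiers are
# hypothetical assumptions, not prescribed by the Model Framework or the PDPO.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    data_sensitivity: int       # 1 (low) to 3 (high), e.g. biometric data = 3
    data_volume: int            # 1 (small) to 3 (large)
    impact_on_individuals: int  # 1 (minor) to 3 (significant, e.g. hiring)

def oversight_level(use_case: AIUseCase) -> str:
    """Map a use case's aggregate risk score to a human-oversight mode."""
    score = (use_case.data_sensitivity
             + use_case.data_volume
             + use_case.impact_on_individuals)
    if score >= 7:
        # Higher risk: a human should make or confirm the final decision.
        return "human-in-the-loop"
    if score >= 5:
        # Medium risk: humans monitor the output and can intervene.
        return "human-over-the-loop"
    # Lower risk: the system may operate without routine human intervention.
    return "human-out-of-the-loop"

# Example: an AI tool screening job applications using sensitive data.
print(oversight_level(AIUseCase(data_sensitivity=3, data_volume=2,
                                impact_on_individuals=3)))  # human-in-the-loop
```

In practice the assessment would of course weigh far more than three factors, but the principle is the same: the higher the risk, the more stringent the human oversight.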

(iii) The customisation of AI models and the implementation and management of AI systems
The customisation, implementation and management of AI systems within the AI lifecycle typically involve the extensive processing of personal data. Attention is therefore required to ensure compliance with the relevant requirements of the PDPO. The Model Framework recommends various measures that can be taken. In general, organisations should minimise the amount of personal data involved in the customisation and use of AI models, while taking into account the necessity of collecting adequate data to ensure accurate and unbiased results.
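
To make data minimisation concrete, the sketch below shows one way an organisation might strip and pseudonymise customer records before using them to customise a model. The field names and the salted-hash scheme are hypothetical assumptions for illustration, not requirements of the Model Framework or the PDPO.

```python
# Illustrative sketch: the field names and pseudonymisation scheme are
# hypothetical assumptions, not requirements of the Model Framework or the PDPO.
import hashlib

# Only the fields actually needed to customise the (hypothetical) model.
REQUIRED_FIELDS = {"enquiry_text", "product_category", "resolution"}

def minimise(record: dict, salt: str) -> dict:
    """Drop unneeded fields and pseudonymise the direct identifier."""
    cleaned = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    # Replace the customer identifier with a salted one-way hash so records
    # can still be de-duplicated without retaining the customer's identity.
    cleaned["record_id"] = hashlib.sha256(
        (salt + record["customer_id"]).encode()).hexdigest()[:16]
    return cleaned

raw = {"customer_id": "C12345", "name": "Chan Tai Man", "phone": "91234567",
       "enquiry_text": "Router keeps disconnecting",
       "product_category": "broadband", "resolution": "Firmware updated"}
print(minimise(raw, salt="rotate-this-salt-regularly"))
```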
 
Furthermore, both customised and off-the-shelf AI models should be rigorously validated and tested before use, and regularly fine-tuned after deployment, to ensure their continued robustness and reliability. Where an AI system is integrated into an organisation’s own systems on an on-premises server or on a cloud server provided by a third party, the organisation should consider the compliance and data security implications of such integration. Given the substantial volume of data typically involved and the frequent use of third-party software components and code in AI, data security measures should be in place to protect AI systems and the underlying data from attacks and data leaks. Regularly re-conducting risk assessments, periodically auditing the AI system and fine-tuning the AI models are recommended to ensure that the models remain reliable and the risks manageable. Moreover, as AI systems are prone to errors owing to their complexity and may be attractive targets for external attacks, organisations are recommended to establish an AI Incident Response Plan to monitor and address any incidents that may occur.
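
As to the AI Incident Response Plan, the following sketch illustrates what a minimal incident record and escalation checklist might look like. The categories and steps are hypothetical assumptions; the Model Framework does not prescribe any particular format.

```python
# Illustrative sketch: the incident categories and response steps are
# hypothetical assumptions; the Model Framework prescribes no set format.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncident:
    system: str        # e.g. "customer-service chatbot"
    category: str      # e.g. "data leakage", "harmful output", "model drift"
    description: str
    detected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

RESPONSE_STEPS = [
    "Contain: suspend or isolate the affected AI system",
    "Assess: gauge the impact on individuals and whether personal data leaked",
    "Notify: escalate to the AI governance committee and, where a data breach "
    "has occurred, consider notifying the PCPD and affected data subjects",
    "Remediate: fix, re-test and re-assess the model before redeployment",
]

def open_incident(incident: AIIncident) -> None:
    """Log the incident and print the escalation checklist."""
    print(f"[{incident.detected_at:%Y-%m-%d %H:%M}] {incident.system}: "
          f"{incident.category} - {incident.description}")
    for step in RESPONSE_STEPS:
        print("  -", step)

open_incident(AIIncident("customer-service chatbot", "harmful output",
                         "chatbot disclosed another customer's details"))
```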

(iv) Communication and engagement with stakeholders
The last part of the Model Framework recommends that organisations be transparent with stakeholders about their use of AI, in adherence to the Ethical Principle of transparency and interpretability. One way for organisations to achieve this is to make the decisions and output of AI explainable where feasible, particularly when there may be a significant impact on individuals. This will garner the trust of stakeholders. Organisations should also take note of the right of data subjects to submit data access and data correction requests under the PDPO. Last but not least, organisations should consider encouraging stakeholders to provide feedback, which can then be used to adjust the AI systems.
 
I believe that the adoption of the Model Framework will enable organisations to implement and use AI in a way that complies with the PDPO and its well-established DPPs. Organisations can also consider adopting the Model Framework, or the relevant parts of it, as part of their organisational AI policy. 

The International Dimension
The risks arising from the use of AI have become a global issue. While globally aligned AI legislation is yet to crystallise, various jurisdictions have been striving to reach a common understanding of the implications of AI before it is too late.
 
For example, the General Assembly of the United Nations unanimously adopted the first global resolution on the promotion of ‘safe, secure and trustworthy’ AI systems in March this year. The Bletchley Declaration, endorsed in November 2023 by 28 countries, including the Mainland, the United States and the United Kingdom, together with the European Union, establishes a shared understanding of the risks associated with frontier AI. Building on the Declaration, industry leaders such as Microsoft and Meta have undertaken efforts to develop and deploy their frontier AI models in a responsible manner.
 
In parallel, my Office is sparing no effort to contribute to global endeavours to govern AI effectively. In January this year, my Office and the University of Hong Kong co-organised an international conference on AI for academics, AI experts and other stakeholders from around the world to explore AI’s implications for personal data protection. As a member of the Working Group on Ethics and Data Protection in Artificial Intelligence of the Global Privacy Assembly, my Office co-sponsored two resolutions, the Resolution on Generative Artificial Intelligence Systems and the Resolution on Artificial Intelligence and Employment, which were adopted by consensus at the annual conference of the Assembly in October 2023. The resolutions call on AI developers, providers and deployers to establish responsible and trustworthy generative AI systems, and on organisations to adopt ‘privacy by design’ when developing or using AI systems in employment contexts, respectively. 

Conclusion
Going forward, AI will no doubt be an essential driving force in the development of the digital economy in Hong Kong and the Greater Bay Area. As we usher in the AI era, the responsible procurement, implementation and use of AI will remain important boardroom matters. Organisations, as data users, should establish a comprehensive AI strategy and ensure ongoing compliance with the PDPO and the Ethical Principles for AI.

Establishing robust AI and ensuring data security are as crucial as the technological advancements themselves.