Date: 11 June 2024
Privacy Commissioner’s Office Publishes
“Artificial Intelligence: Model Personal Data Protection Framework”
As AI technology rapidly develops, the application of AI has become increasingly prevalent. To address the challenges posed by AI to personal data privacy and to support the “Global AI Governance Initiative” of the Motherland, the Office of the Privacy Commissioner for Personal Data (PCPD) today issued the “Artificial Intelligence: Model Personal Data Protection Framework”.
The Privacy Commissioner for Personal Data (Privacy Commissioner), Ms Ada CHUNG Lai-ling, said, “AI security is one of the major fields of national security. The PCPD published the “Artificial Intelligence: Model Personal Data Protection Framework” (Model Framework) to provide internationally well-recognised and practical recommendations and best practices to assist organisations in procuring, implementing and using AI, including generative AI, in compliance with the relevant requirements of the Personal Data (Privacy) Ordinance (PDPO), so that organisations can harness the benefits of AI while safeguarding personal data privacy. I believe that the Model Framework will help nurture the healthy development of AI in Hong Kong, facilitate Hong Kong’s development into an innovation and technology hub, and propel the expansion of the digital economy not only in Hong Kong but also in the Greater Bay Area.”
Prof Hon William WONG Kam-fai, MH, Member of the PCPD’s Standing Committee on Technological Developments and Legislative Council member, who wrote the foreword for the Model Framework, said, “This is an opportune moment for the PCPD to publish the Framework, as our Motherland is currently focusing on the pursuit of new quality productive forces and has launched the “Artificial Intelligence +” initiative to foster industrial development through technological innovation. The Framework serves as useful guidance for enterprises utilising AI technology, thus promoting industrial innovation and upgrading. In the broader context, the Framework contributes to the development of Hong Kong’s digital economy, helps strengthen the city’s status as a global technology and innovation hub, and proactively facilitates its integration with the development of our Motherland.”
The Model Framework published by the PCPD has received support from the Office of the Government Chief Information Officer of the Hong Kong Government and the Hong Kong Applied Science and Technology Research Institute. In addition, the PCPD consulted various experts and relevant stakeholders in the drafting of the Model Framework, including members of the Standing Committee on Technological Developments of the PCPD, public organisations, the technology industry, universities and AI suppliers. The PCPD expresses its sincere gratitude to these experts and stakeholders for their support and valuable comments during the process of drafting and publishing the Model Framework.
Specifically, the Model Framework, which is based on general business processes, provides organisations that procure, implement and use any type of AI system with a set of recommendations and best practices on AI governance for the protection of personal data privacy. The Model Framework aims to assist organisations in complying with the requirements under the PDPO and adhering to the three Data Stewardship Values and seven Ethical Principles for AI advocated in the “Guidance on the Ethical Development and Use of Artificial Intelligence” published by the PCPD in 2021. The Model Framework covers recommended measures in the following four areas (please refer to Annex 1 for a summary of the recommended measures):
- Establish AI Strategy and Governance: Formulate the organisation’s AI strategy and governance considerations for procuring AI solutions, establish an AI governance committee (or similar body) and provide employees with training relevant to AI;
- Conduct Risk Assessment and Human Oversight: Conduct comprehensive risk assessments, formulate a risk management system, adopt a “risk-based” management approach, and, depending on the levels of the risks posed by AI, adopt proportionate risk mitigating measures, including deciding on the level of human oversight;
- Customisation of AI Models and Implementation and Management of AI Systems: Prepare and manage data, including personal data, for customisation and/or use of AI systems, test and validate AI models during the process of customising and implementing AI systems, ensure system security and data security, and manage and continuously monitor AI systems; and
- Communication and Engagement with Stakeholders: Communicate and engage regularly and effectively with stakeholders, in particular internal staff, AI suppliers, individual customers and regulators, in order to enhance transparency and build trust.
To assist organisations in understanding the Model Framework, the PCPD has also published a leaflet setting out some key recommendations extracted from the Model Framework.
Download the “Artificial Intelligence: Model Personal Data Protection Framework”:
https://www.pcpd.org.hk/english/resources_centre/publications/files/ai_protection_framework.pdf
Download the Leaflet on the “Artificial Intelligence: Model Personal Data Protection Framework”:
https://www.pcpd.org.hk/english/resources_centre/publications/files/leaflets_protection_framework.pdf
Ms Ada CHUNG Lai-ling, Privacy Commissioner, published the “Artificial Intelligence: Model Personal Data Protection Framework”.
Prof Hon William WONG Kam-fai, Member of the PCPD’s Standing Committee on Technological Developments and Legislative Council member, spoke at the media briefing on the publication of the “Artificial Intelligence: Model Personal Data Protection Framework”.
Prof Hon William WONG Kam-fai (second right), Member of the PCPD’s Standing Committee on Technological Developments and Legislative Council member, Mr Anthony CHIU Shin-hang (first left), Assistant Government Chief Information Officer (IT Infrastructure), Office of the Government Chief Information Officer, Ms Ada CHUNG Lai-ling (second left), Privacy Commissioner, and Mr Alan CHEUNG (first right), Chief Director, Artificial Intelligence and Trust Technologies, Hong Kong Applied Science and Technology Research Institute, attended the media briefing.
Prof Hon William WONG Kam-fai (third right), Member of the PCPD’s Standing Committee on Technological Developments (SCTD) and Legislative Council member, Ms Ada CHUNG Lai-ling (middle), Privacy Commissioner, Mr Anthony CHIU Shin-hang (third left), Assistant Government Chief Information Officer (IT Infrastructure), Office of the Government Chief Information Officer, other members of the SCTD, including Mr Alan CHEUNG (second right), Ir Alex CHAN (second left), Adjunct Professor Jason LAU (first right) and Ms Cecilia SIU Wing-sze (first left), Assistant Privacy Commissioner for Personal Data (Legal, Global Affairs & Research), attended the media briefing.
-End-
Annex 1
“Artificial Intelligence: Model Personal Data Protection Framework”
Summary
The Model Framework covers recommended measures in the following four areas:
- Establish AI Strategy and Governance:
  - Organisations should have an internal AI governance strategy, which generally comprises (i) an AI strategy, (ii) governance considerations for procuring AI solutions, and (iii) an AI governance committee (or similar body) to steer the process, including providing directions on the purposes for which AI solutions may be procured and on how AI systems should be implemented and used;
  - Consider governance issues in the procurement of AI solutions, including whether the potential AI suppliers have followed international technical and governance standards, the general criteria and procedures for submitting an AI solution to the AI governance committee (or similar body) for review, any data processor agreements to be signed, and the policy on handling the output generated by the AI system (e.g. employing techniques to anonymise personal data contained in AI-generated content, labelling or watermarking AI-generated content, and filtering out AI-generated content that may pose ethical concerns);
  - Establish an internal governance structure with sufficient resources, expertise and authority to steer the implementation of the AI strategy and oversee the procurement, implementation and use of the AI system, including establishing an AI governance committee (or similar body) that reports to the board, and establishing effective internal reporting mechanisms for reporting any system failure or raising any data protection or ethical concerns, so as to facilitate proper monitoring by the AI governance committee; and
  - Provide AI-related training to employees to ensure that they have the appropriate knowledge, skills and awareness to work in an environment using AI systems. For instance, for AI system users (including operational personnel in the business), training topics may include compliance with data protection laws, regulations and internal policies, cybersecurity risks, and general AI technology.
- Conduct Risk Assessment and Human Oversight:
  - A comprehensive risk assessment is necessary for organisations to systematically identify, analyse and evaluate the risks, including privacy risks, involved in the procurement, use and management of AI systems. Factors to be considered in a risk assessment include the requirements of the PDPO; the volume, sensitivity and quality of data (including personal data); the security of data; and the probability that privacy risks (e.g. excessive collection, misuse or leakage of personal data) will materialise and the potential severity of the resulting harm. For example, an AI system that assesses the creditworthiness of individuals tends to carry a higher risk than one used to present individuals with personalised advertisements, because the former may deny them access to credit facilities, which, generally speaking, has a higher impact than the latter;
  - Adopt risk management measures that are proportionate to the relevant risks, including deciding on an appropriate level of human oversight, for example, “human-out-of-the-loop” (where AI makes decisions without human intervention), “human-in-command” (where human actors oversee the operation of AI and intervene whenever necessary), and “human-in-the-loop” (where human actors retain control in the decision-making process to prevent and/or mitigate improper output and/or decisions by AI); and
  - When seeking to mitigate AI risks to comply with the Ethical Principles for AI, organisations may need to strike a balance when conflicting criteria emerge and make trade-offs between them. Organisations may need to consider the context in which they are deploying the AI to make decisions or generate content, and decide accordingly how to justifiably address the trade-offs that arise. For example, explainability may be relatively important in a context where a decision of an AI system affects a customer’s access to services, as a human reviewer performing oversight would need to explain the AI system’s decision to the customer.
- Execute Customisation of AI Models and Implementation and Management of AI Systems:
  - Adopt measures to ensure compliance with the requirements of the PDPO when preparing and managing data (including personal data) for the customisation and/or use of the AI model, such as using the minimum amount of personal data, ensuring the quality of data, and properly documenting the handling of data for the customisation and use of AI;
  - During the process of customising and implementing the relevant AI system, validate it with respect to privacy obligations and ethical requirements, including fairness, transparency and interpretability; test the AI model for errors to ensure reliability, robustness and fairness; and perform rigorous user acceptance testing;
  - Ensure system security and data security, such as by implementing measures (e.g. red teaming) to minimise the risk of attacks against machine learning models, implementing internal guidelines for staff on the acceptable input to be fed into, and the permitted/prohibited prompts to be entered into, the AI system, and establishing mechanisms to enable the traceability and auditability of the AI system’s output;
  - Manage and continuously monitor the AI system and adopt a review mechanism (including conducting re-assessments of the AI system to identify and address new risks, especially when there is a significant change to the functionality or operation of the AI system or to the regulatory or technological environment);
  - Establish an AI Incident Response Plan encompassing the elements of defining, monitoring for, reporting, containing, investigating and recovering from an AI incident; and
  - Conduct internal audits (and independent assessments, where necessary) regularly to ensure that the use of AI continues to comply with the requirements of the organisation’s relevant policies and aligns with its AI strategy.
- Foster Communication and Engagement with Stakeholders:
  - Communicate and engage regularly and effectively with stakeholders, in particular internal staff, AI suppliers, individual customers and regulators, in order to enhance transparency and build trust;
  - Handle data access and correction requests, and provide feedback channels;
  - Provide explanations for decisions made by, and output generated by, AI, disclose the use of the AI system, disclose the risks involved, and consider allowing opt-outs; and
  - Use plain language that is clear and understandable to lay persons when communicating with stakeholders, particularly consumers.
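Purely as an illustration, and not as part of the Model Framework itself, the three levels of human oversight described above (“human-out-of-the-loop”, “human-in-command” and “human-in-the-loop”) can be sketched as a simple risk-based dispatcher. All names, risk labels and the review step below are hypothetical:

```python
# Hypothetical sketch of risk-proportionate human oversight.
# The risk levels and routing rules are illustrative only; a real
# organisation would derive them from its own risk assessment.
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str      # what the AI decision concerns
    ai_output: str    # the AI system's proposed output
    risk_level: str   # "low", "medium" or "high", assigned by the risk assessment

def human_review(decision: Decision) -> str:
    # Placeholder for a real review workflow (queueing, explanation, sign-off).
    return f"reviewed:{decision.ai_output}"

def route(decision: Decision) -> str:
    if decision.risk_level == "low":
        # "Human-out-of-the-loop": the AI decision takes effect directly.
        return decision.ai_output
    if decision.risk_level == "medium":
        # "Human-in-command": humans oversee operation and may intervene;
        # here the decision is logged for oversight but not blocked.
        print(f"oversight log: {decision.subject} -> {decision.ai_output}")
        return decision.ai_output
    # "Human-in-the-loop": a human reviewer retains control over high-risk
    # decisions, e.g. a creditworthiness assessment.
    return human_review(decision)

print(route(Decision("ad-personalisation", "show_ad", "low")))
print(route(Decision("credit-application", "approve", "high")))
```

The point of the sketch is only that the degree of human involvement scales with the assessed risk, in line with the “risk-based” management approach the Model Framework recommends.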
-End-