Hong Kong, China: PCPD’s Model Framework Helps Organizations Using AI Ensure Compliance
As artificial intelligence (AI), including generative AI, grows in prevalence, the privacy and ethical risks brought by the new technology cannot be overstated. Ada Chung Lai-Ling, Privacy Commissioner for Personal Data (PCPD), Hong Kong, China, looks at the steps organizations can take to ensure compliance and at the guidance the PCPD offers to help them.
Last year, a software engineer at a South Korean tech giant uploaded confidential source code to an AI chatbot. Although the generative AI tool worked wonders, the company took notice and swiftly banned the use of AI chatbots, citing the risk of sensitive information being stored on external servers, potentially accessible by unauthorized users or repurposed for training other AI models.
This is not an isolated incident. A 2024 study by a cybersecurity firm found that almost 75% of workers who used ChatGPT at work did so with non-corporate accounts, which offer fewer security and privacy controls than the enterprise versions. Worryingly, 27.4% of the corporate data that employees put into AI platforms is sensitive, including source code, customer support information, and research and development data. The Dutch data protection authority (AP) has recently received notifications of several data breaches caused by employees sharing personal data with AI chatbots, including a doctor who entered patient data and a telecom employee who input customer addresses into an AI chatbot.
The widespread use of AI tools by staff underscores a critical challenge in the age of AI: How can organizations innovate with AI while safeguarding personal data privacy? As privacy and ethical risks loom large, what steps must organizations take to navigate these challenges?
The awareness-action gap
The good news is that organizations are not oblivious to such risks. A 2023 survey by the Office of the Privacy Commissioner for Personal Data (PCPD), Hong Kong, and the Hong Kong Productivity Council (HKPC), for example, showed that among emerging technologies, local companies considered generative AI to pose the highest privacy risk. A global study published in 2024 echoed this finding, with 92% of organizations recognizing generative AI as fundamentally different from previous technologies and as requiring updated data and risk management strategies.
Consumers also share these concerns. A poll found that 62% of consumers worried about how businesses used their personal data for AI, and 60% had lost trust in organizations because of their AI practices. These concerns are not unfounded. In a previous article, ‘Hong Kong: The privacy and ethical risks of generative AI cannot be ignored’, I highlighted several privacy and ethical risks, including the collection of training data without informed consent, the potential use of user conversations for AI training, and the rights of individuals to access and modify their data. Other identified risks include data security vulnerabilities, discriminatory outputs, and copyright issues.
Despite widespread recognition of these risks, many organizations are ill-prepared to address them. The PCPD–HKPC survey found that fewer than half (48%) of the companies using emerging technologies had established internal guidelines for managing privacy risks. Furthermore, 55% of the surveyed small- and medium-sized enterprises (SMEs) had yet to implement a Personal Data Privacy Management Program.
From awareness to action
As organizations stand on the brink of an AI revolution, they must adopt a structured approach to data governance and data protection. The stakes could not be higher. AI has been compared to the steam engine, the printing press, and electricity because of its potential to transform industries. Yet the new technology must be harnessed responsibly to unleash its greatest potential while minimizing the risks involved. Jurisdictions around the world have taken various approaches to AI regulation, ranging from dedicated laws to self-regulation.
In Hong Kong, the Personal Data (Privacy) Ordinance (PDPO), which is a principle-based and technology-neutral piece of legislation, applies to the collection, holding, processing, and use of personal data, whether through the use of AI or not. To assist organizations in navigating the complexities of AI governance, the PCPD issued guidance materials on AI, including the ‘Guidance on the Ethical Development and Use of Artificial Intelligence’ in 2021 (the Guidance) and the ‘Artificial Intelligence: Model Personal Data Protection Framework’ (the Model Framework) in June 2024.
The Guidance advocates three Data Stewardship Values and seven Ethical Principles for AI, in line with international standards. The Model Framework, which is premised on general business processes, was published to foster the healthy and safe development of AI in Hong Kong and to facilitate the city’s development into an innovation and technology hub. Reflecting the values and principles advocated in the Guidance, the Model Framework offers a set of recommendations and best practices on protecting personal data privacy for companies that procure, implement, and use any type of AI system, including generative AI.
The Model Framework covers four key areas:
1. AI strategy and governance:
Buy-in from and active participation by top management (such as C-level executives or board directors) are pivotal to the formulation of AI strategy and governance. An internal AI governance strategy should be devised, including an organizational-level AI roadmap listing the directions and purposes for which AI systems may be adopted. Beyond strategic alignment, the Model Framework recommends that a company consider governance issues in its supplier management processes when procuring third-party AI systems, such as the key privacy and security obligations or ethical requirements to be conveyed to potential AI suppliers. The establishment of an internal AI governance committee (or similar body) and the provision of adequate training to employees are also critical to cultivating a privacy-protecting culture within the company.
2. Risk assessments and human oversight:
The Model Framework recommends that companies conduct comprehensive risk assessments to systematically identify, analyze, and evaluate the risks, including privacy risks, involved in the AI lifecycle, so that corresponding risk mitigation measures, including an appropriate level of human oversight, can be deployed. For higher-risk uses of AI, such as the real-time identification of individuals using biometric data, the evaluation of individuals’ creditworthiness for automated financial decisions, or the assessment of employees’ job performance, companies may wish to adopt a ‘human-in-the-loop’ approach so that human actors retain control of the decision-making process and can prevent or mitigate errors made by the AI.
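As an illustration only, the sketch below shows one way such a ‘human-in-the-loop’ gate might be expressed in code: an AI decision is applied automatically only when the use case is low-risk and the model is sufficiently confident; otherwise it is queued for a human reviewer. Everything here (the `AIDecision` record, the `route_decision` helper, the risk categories, and the 0.9 confidence threshold) is a hypothetical assumption, not part of the Model Framework itself.

```python
from dataclasses import dataclass

# Hypothetical categories of higher-risk AI use, echoing the examples in
# the Model Framework (biometric identification, credit scoring, and
# employee performance assessment).
HIGH_RISK_USES = {"biometric_identification", "credit_scoring", "performance_assessment"}

@dataclass
class AIDecision:
    use_case: str      # e.g. "credit_scoring"
    outcome: str       # the AI system's proposed decision
    confidence: float  # model confidence score, between 0.0 and 1.0

def route_decision(decision: AIDecision, confidence_threshold: float = 0.9) -> str:
    """Route an AI decision to automatic execution or to a human reviewer."""
    if decision.use_case in HIGH_RISK_USES:
        # Human-in-the-loop: a person confirms every high-risk decision.
        return "escalate_to_human_reviewer"
    if decision.confidence < confidence_threshold:
        # Low model confidence also triggers human review.
        return "escalate_to_human_reviewer"
    return "apply_automatically"

# An automated creditworthiness assessment is always escalated,
# regardless of how confident the model is.
print(route_decision(AIDecision("credit_scoring", "decline", 0.97)))
```

The point of the design is that escalation depends on the use case, not only on model confidence: a high-risk decision is reviewed by a person even when the model is very sure of itself.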
3. Customization of AI models and implementation and management of AI systems:
As extensive processing of personal data is inevitable in the AI lifecycle, companies should take various measures to ensure compliance with the PDPO. Specifically, when customizing and implementing AI solutions, companies should minimize the amount of personal data involved and ensure that both customized and off-the-shelf AI models are rigorously validated and tested before use.
For instance, when a company is considering deploying a third-party-developed AI chatbot on its online retail platform, information related to its customers’ purchases and browsing histories may be needed to fine-tune the chatbot, whereas other personal data, such as the customers’ names, contact details, and other demographic characteristics, is not necessary. Furthermore, the Model Framework recommends that companies establish an AI Incident Response Plan to monitor and address AI incidents that may occur.
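To make the data minimization point concrete, here is a minimal sketch, assuming a hypothetical customer record and a fine-tuning pipeline that needs only purchase and browsing histories. The field names and the `minimize_for_finetuning` helper are illustrative, not drawn from the Model Framework: the idea is simply to strip direct identifiers and demographics before any record reaches the model.

```python
# Fields the chatbot fine-tuning task actually needs (hypothetical).
REQUIRED_FIELDS = {"purchase_history", "browsing_history"}

def minimize_for_finetuning(customer_record: dict) -> dict:
    """Keep only the fields required for fine-tuning; drop everything else."""
    return {key: value for key, value in customer_record.items() if key in REQUIRED_FIELDS}

record = {
    "name": "Jane Doe",            # direct identifier: not needed
    "email": "jane@example.com",   # contact detail: not needed
    "age": 34,                     # demographic attribute: not needed
    "purchase_history": ["item_123", "item_456"],
    "browsing_history": ["/sale", "/electronics"],
}

# Only the purchase and browsing histories survive the filter.
print(minimize_for_finetuning(record))
```

An allowlist of required fields, rather than a blocklist of sensitive ones, is the safer default: any new field added to a record is excluded until someone decides it is genuinely needed.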
4. Communication and engagement with stakeholders:
Companies’ ongoing communication with stakeholders instills trust. Companies should therefore encourage feedback from all stakeholders and strive to make their AI-generated decisions and output as clear and explicable as possible. Contrary to claims that it would increase compliance costs, I believe that adopting the Model Framework would in fact help reduce them, as it would materially reduce the need for companies to seek external advice from system developers, contractors, or even professional service providers, allowing them to focus their resources on high-risk areas.
I believe that adopting the Model Framework as part of an organizational AI policy would enable organizations to implement and use AI in a way that complies with the PDPO.
Together, let’s create a safer AI world
In 2016, physicist Stephen Hawking warned that ‘the rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which.’ I am confident that if we work together to ensure proper AI governance and AI security, we can get the best out of AI so that the worst risks of AI will remain as theories rather than realities.
The time to act is now. By emphasizing and adhering to strong AI governance, organizations can ensure compliance with data protection laws while gaining a competitive edge in the fast-evolving AI landscape.
Ada Chung Lai-Ling, Privacy Commissioner for Personal Data, Hong Kong, China