Date: 21 February 2024
Implications of the Development or Use of Artificial Intelligence
on Personal Data Privacy
The Privacy Commissioner’s Office has Completed Compliance Checks
on 28 Organisations
With the development and use of Artificial Intelligence (AI) becoming increasingly common in Hong Kong, organisations may collect, use or process personal data when they develop or use AI systems, thereby posing risks to personal data privacy. In order to understand the implications of the development and use of AI on personal data privacy in Hong Kong, the Office of the Privacy Commissioner for Personal Data (PCPD) carried out compliance checks on 28 local organisations (Organisations) from August 2023 to February 2024 to understand their practices in relation to the collection, use and processing of personal data in the development or use of AI, as well as their AI governance structures.
The exercise covered various sectors, including telecommunications, finance and insurance, beauty services, retail, transportation and education, as well as government departments. Based on the findings of the compliance checks, the PCPD has the following overall observations as regards the Organisations' data protection practices when they develop or use AI:
- 21 organisations used AI in their day-to-day operations, which included using AI in data analysis, assessing the interview performance of job candidates, and utilising chatbots to respond to customer enquiries;
- Among the 21 organisations, 19 had established internal AI governance frameworks, such as setting up an AI governance committee and/or appointing a designated officer to oversee the development or use of AI products or services;
- Only 10 of the 21 organisations collected personal data through AI products and services. These 10 organisations provided data subjects with Personal Information Collection Statements on or before the collection of their personal data, which specified the purposes for which the data was to be used, as well as the classes of persons to whom the data might be transferred;
- Eight of the 10 organisations had conducted privacy impact assessments prior to the development or use of AI products and services;
- All 10 organisations implemented appropriate security measures to ensure that the personal data held by them was protected against unauthorised or accidental access, processing, erasure, loss or use in the course of the development or use of AI products or services. These measures included granting access to personal data to authorised personnel only, encrypting personal data at rest and in transit, conducting regular security vulnerability assessments and penetration tests, and providing employees with written guidelines and training; and
- Among the 10 organisations, nine retained the personal data collected through their AI products or services. Of these, eight specified retention periods for the personal data and would delete or anonymise the data once the original purpose of collection had been achieved. The remaining organisation allowed data subjects to delete their personal data themselves.
The PCPD has now completed the compliance checks and found no contravention of the Personal Data (Privacy) Ordinance (PDPO) in the process. The results of the exercise show that an increasing number of organisations, both public and private, are deploying AI to enhance the efficiency of their daily operations.
The Privacy Commissioner for Personal Data, Ms Ada CHUNG Lai-ling, said, “While AI has immense potential for driving productivity and economic growth, it also poses varying degrees of personal data privacy and ethical risks. I am pleased to learn that among the organisations reviewed, most organisations established internal AI governance frameworks to oversee the development or use of AI products or services. Organisations, as data users, bear the responsibility to ensure data security when they develop or use AI systems. They should review and assess the impacts of AI systems on personal data privacy in a timely manner to ensure compliance with the relevant provisions of the PDPO.”
The PCPD issued the "Guidance on the Ethical Development and Use of Artificial Intelligence" in August 2021 to facilitate the healthy development and use of AI in Hong Kong, as well as to assist organisations in mitigating privacy and ethical risks and complying with the relevant provisions of the PDPO when they develop or use AI.
Through this compliance check exercise, the PCPD would like to provide the following recommended measures to organisations which develop or use AI:
- If an organisation collects or processes personal data in the development or use of AI, it should adopt measures to ensure compliance with the PDPO, as well as monitor and review its AI systems on a continuing basis;
- Establish a strategy for the development or use of AI and an internal AI governance structure, and provide adequate training to all relevant personnel;
- Conduct comprehensive risk assessments (including privacy impact assessments) to systematically identify, analyse and evaluate the risks, including privacy risks, in relation to the development or use of AI, and adopt appropriate risk management measures commensurate with those risks, for instance adopting a higher level of human oversight for an AI system with a higher risk profile; and
- Communicate and engage effectively with stakeholders to enhance transparency in the use of AI, and fine-tune AI systems in response to concerns raised by stakeholders.