To harness the benefits of artificial intelligence (AI), organisations across the globe have been exploring ways to integrate AI into their operations to optimise efficiency and leverage algorithms for more granular analyses. This phenomenon is evident in Hong Kong too.
About 41 per cent of Hong Kong firms are applying or planning to apply AI technologies, according to a study released in March by the Hong Kong Productivity Council and the Hong Kong Institute of Economics and Business Strategy at the University of Hong Kong’s Business School. It found that the average investment in AI is HK$830,000.
Organisations are increasingly using AI to help with tasks ranging from identifying new compounds for medical treatments and making investment choices to analysing consumer behaviour and delivering tailored content to customers. Employees are also embracing the convenience offered by AI-powered personal assistants.
I don’t need a crystal ball to see that many aspects of our lives will, to some extent, become AI-assisted, if not entirely AI-driven. It is beyond dispute that the era of AI has arrived, and that AI will fundamentally reshape our future.
But for AI to be a positive game-changer, safety is an essential consideration. As Premier Li Qiang put it at the recent World Economic Forum annual meeting: “Like other technologies, AI is a double-edged sword. If it is applied well, it can do good and bring opportunities to the progress of human civilisation and provide great impetus to the industrial and scientific revolution.”
The rise of AI also presents some of the thorniest challenges of our time. In the wrong hands, the technology can be a nightmare. Scammers have exploited AI’s ability to generate convincing photos and videos in seconds to orchestrate deepfake scams: in Hong Kong, digital impersonations of senior executives at multinational companies have tricked employees into transferring funds into fraudsters’ accounts.
AI also draws frequent criticism for the privacy and ethical issues associated with its application. For example, the lack of transparency in AI models, from training to deployment, raises concerns that personal data may be harvested and used without consent. Data entered into generative AI chatbots may also be stored on external servers over which organisations have no direct control. And a recruitment process driven solely by AI may reinforce gender or racial biases, especially when the underlying models have been trained on unrepresentative data sets.
Given the profound implications of AI applications, China announced its Global AI Governance Initiative last October, advocating that the development and safety of AI be given equal weight. A month later, the Bletchley Declaration, endorsed by the European Union and 28 countries including China, delivered a powerful statement recognising “a unique moment to act and affirm the need for the safe development of AI”.
Managing the risks of AI is no easy task. Organisations may find it challenging to navigate the regulatory landscape, given the complexity and novelty of AI technology.
Organisations in Hong Kong that develop, customise or use AI systems involving personal data are duty-bound to comply with the Personal Data (Privacy) Ordinance. As a piece of technology-neutral legislation, the ordinance applies regardless of the technology employed. In other words, there is no lacuna: the ordinance applies equally to the handling of personal data by AI.
With this in mind, my office recently published the “Artificial Intelligence: Model Personal Data Protection Framework”, which offers internationally well-established and practicable recommendations, along with best practices, to help organisations procure, implement and use AI, including generative AI, in compliance with the ordinance.
This model framework covers recommended measures for four general business processes: establishing an AI strategy and governance; conducting risk assessments and maintaining human oversight; customising AI models and implementing and managing AI systems; and communicating and engaging with stakeholders.
Companies may be concerned that adopting the model framework would increase compliance costs. On the contrary, we believe it would help to reduce them.
Indeed, the framework provides a step-by-step guide to the considerations and measures to be taken throughout the life cycle of AI procurement, implementation and use, materially reducing the need for organisations to seek external advice from system developers, contractors or even professional service providers.
Moreover, in line with international practice, the framework recommends that organisations adopt a risk-based approach, implementing risk management measures that are commensurate with the risks posed, including an appropriate level of human oversight. This enables organisations to save costs by focusing resources on oversight of higher-risk AI applications.
The model framework has thus been introduced to facilitate the safe and cost-effective implementation and use of AI, not to inhibit it.
With Hong Kong poised to become an international innovation and technology hub, I believe the model framework will help nurture the healthy and safe development of AI in the city and propel the expansion of the digital economy throughout the Greater Bay Area.
AI is set to transform almost every facet of our lives. Whether AI becomes a game-changer for better or for worse hinges on our actions today. By fostering the safe and responsible use of AI, together we can build a trustworthy AI-driven world.