Newspaper Column

PCPD in Media

AI and Ethics: Ensuring the Responsible Use of Generative AI in Banking

With generative AI (genAI) adoption becoming widespread, and opportunities to use genAI in the banking and finance sector continuing to proliferate, this article examines its ethical and responsible use.

In November 2022, OpenAI debuted ChatGPT. Since then, numerous chatbots powered by genAI have emerged – such as Baidu’s ERNIE Bot, Anthropic’s Claude and Google’s Bard – allowing the general public to interact with AI in a conversational manner. Despite a recent dip in traffic, genAI chatbots remain popular. In August 2023 alone, ChatGPT drew over 1.4 billion visits worldwide.

The banking sector stands to benefit from genAI’s transformative power. A study by consultancy firm McKinsey estimates that, if the use cases it identified were fully implemented, the technology could deliver up to USD340 billion of value, or 4.7% of the banking industry’s annual revenue. Research by another consultancy firm, Accenture, suggests that 54% of current tasks in the banking sector have potential for transformation using genAI. Despite its potential, however, genAI poses privacy and ethical risks. Furthermore, if left unchecked, these risks could mutate into serious issues.

GenAI gaining ground in the banking industry
First, what is genAI? According to an answer provided by ChatGPT, it is “a subset of artificial intelligence that uses machine learning techniques to generate new data or content, such as images, text or music that is similar to or based on the training data it has been fed”.

Banks have long deployed chatbots to handle straightforward customer queries. As mobile platforms flourished in the 2010s, AI chatbots evolved to offer direct assistance to users or to guide them to relevant help resources.

GenAI, however, is revolutionising this domain. The appeal of genAI is multifaceted: it can deliver personalised customer support, provide contextual marketing ideas, ensure round-the-clock services, and help staff better understand customers’ needs. In fact, an investment bank has just lodged a patent application for a genAI service for equity selection.

In the banking and finance environment, genAI covers the following areas:

1. Risk management and compliance: GenAI has proven to be valuable in risk management. GenAI models can enhance anti-money laundering surveillance, spot patterns of suspicious transactions, raise alerts promptly and minimise missed cases. An example is a US-based bank that incorporated genAI into its fraudulent and suspicious activity detection systems, resulting in a marked decrease in false positives and improved fraud detection rates. Moreover, genAI can simplify the compilation of suspicious transaction reports and strengthen staff training by simulating realistic cases. In compliance, genAI can help decipher dense regulatory texts, compare regulations across jurisdictions, and highlight areas for improvement.

2. Business enhancement: From a business standpoint, genAI helps banks in predicting economic trends and simulating practices at a macro level. At a micro level, it may influence every step of the lending process, from borrower analysis to market research and regular reviews.

3. Operational improvements: Operational improvements can be achieved in various business areas, including administration, human resources, technology, procurement and legal. Although these advancements are not exclusive to banking, the sector’s innovative nature often paves the way for early adoption. A prime example of the power of genAI is content writing, which enables the generation of pitch books with insights drawn from a vast array of resources, resulting in significant time and resource savings.

Privacy pitfalls
While banks traditionally enjoy a high degree of trust from their customers in handling personal data, the introduction of genAI presents new privacy and ethical concerns.

To dissect these issues, it is possible to map them against the Data Protection Principles (DPPs) in the Personal Data (Privacy) Ordinance, which cover the entire lifecycle of personal data, from collection, holding, processing and use to deletion.

Data Collection
The first privacy risk relates to data collection. GenAI-powered chatbots use “deep learning technology”, which involves analysing massive volumes of unstructured data without supervision.

Whether banks build their own generative models or fine-tune existing models with their specific data, the process requires a vast amount of data, including personal data, which might originate from transaction records, customer profiles, financial statements and more. As the original purpose of collecting such personal data may not have included AI model training, this type of use potentially circumvents the DPPs governing collection and transparency (DPPs 1 and 5), which require personal data to be collected in a fair manner and on an informed basis.

Use limitation
The second privacy risk pertains to interactions with genAI tools.

User inputs, potentially encompassing sensitive personal data such as names, identity card numbers and account numbers, might be used beyond their original purpose and used as training data for the models. For instance, a virtual assistant could compare users’ inputs with data from other conversations to better understand customers’ needs. Such potential misuse might contravene the use limitation principle (DPP3), which stipulates that personal data should not be used for a new purpose without the prescribed consent of the data subject.

Furthermore, there is a risk of sensitive information leakage in the models’ outputs. With the new training data mentioned above, the system might fine-tune its responses, inadvertently leaking personally identifiable information.

Access to and correction of data
The third privacy risk relates to the challenges over the rights of data subjects to access and correct their personal data (DPP6) and the retention of the data (DPP2). Given the sheer volume of training data, whether it is practicable for users to access or correct their data remains an issue.

Data security
Another privacy risk concerns data security. This arises from the storage of numerous user conversations in the systems. Furthermore, genAI systems might be attacked by “jailbreaking” prompts, which are specifically designed to bypass safeguards for malicious ends and might cause operational issues or data leakage. This would violate the data security principle (DPP4), which requires personal data to be protected against unauthorised or accidental access, processing, erasure, loss or use.

Broader ethical implications
Beyond privacy risks, genAI also poses broader ethical risks, including the following.

1. Explainability: While decisions made by traditional AI are often hard to interpret, this challenge is intensified in genAI. These models are trained on vast and diverse datasets, making it extremely difficult to trace certain outputs to specific data inputs, let alone to explain the technical process to lay users.

2. Accuracy: GenAI can produce confident but incorrect statements. In banking, this might entail a misestimated loan risk, or erroneous outputs to regulators. Another cause of genAI’s inaccuracy is its underperformance in numerical analysis compared with text. This is a challenge for banks, where numbers are of the essence.

3. Risk amplification: A note published by the International Monetary Fund suggests that the use of genAI could escalate systemic risks. This stems from the “herd mentality” that arises from different banks using the same systems, and a potential inclination towards high-risk suggestions from the models due to a lack of robust risk management during training.

Navigating the changes: Existing guidance and emerging AI regulations
Responding to the risks presented by the development and use of AI, authorities around the world have proposed or have already put in place regulations and laws.

In Mainland China, the “Interim Measures for the Management of the Services by Generative AI”, the world’s first regulations specific to AI-generated content, became effective in August 2023. The measures set out the obligations of genAI service providers to ensure, among others, the quality of training data and to adopt measures to prevent minors from becoming addicted to AI-generated content.

Elsewhere, the EU is planning to regulate AI, including genAI, with an Artificial Intelligence Act, which, if enacted, will introduce a risk-based approach and a new regulator. In Canada, the AI and Data Act is being considered, while the UK launched a consultation on its pro-innovation regulatory AI approach in March 2023.

Despite the variation in approaches, the common goal of these jurisdictions is to create an environment in which AI can evolve and operate in a manner that respects privacy, protects data, curtails bias and champions transparency.

Apart from regulations, governments and regulators have issued guidance and recommendations on the development and deployment of AI.

In August 2021, the Office of the Privacy Commissioner for Personal Data, Hong Kong (PCPD) published the “Guidance on the Ethical Development and Use of Artificial Intelligence” (Guidance). The Guidance outlines frameworks for deploying AI in a privacy-friendly manner that minimises ethical risks, while striking a balance between fostering innovation and ensuring the protection of personal data.

To this end, the PCPD proposed three sets of recommendations: three data stewardship values, seven ethical principles and a four-step practice guide. The data stewardship values – being respectful, beneficial and fair – underscore the importance of treating individuals as human beings, not as mere data sets. AI should work for the benefit of the broader community while minimising any possible harm. Fairness should permeate throughout the process of using the technology and the results it generates. Any differential treatment should be justified.

The seven ethical principles of accountability; human oversight; transparency and interpretability; data privacy; fairness; beneficial AI; and reliability, robustness and security, align with internationally recognised principles in this area.

The four-step practice guide recommends various safeguards across four business processes, namely, establishing an internal governance structure, conducting comprehensive risk assessments, executing AI model development and system management, and fostering communication with stakeholders, including the organisation’s employees and customers.

In addition to the Guidance, developers or users of AI systems are also recommended to adopt a personal data Privacy Management Programme (PMP), a management framework for the responsible collection, holding, processing and use of personal data by an organisation.

The PMP comprises three components. The first component is organisational commitment, which hinges on top-management buy-in, the appointment of a dedicated data protection officer, and the creation of a reporting mechanism directly to senior management.

The second component is programme controls, which encapsulate control measures such as the formation of a personal data inventory, the establishment of data handling policies, and the application of risk assessment tools, among others.

The third component is ongoing assessment and revision of the PMP. Organisations should devise a plan for oversight and review, and periodically revise their programme controls.

The three components collectively enhance data security for any organisation, allow effective mitigation of the privacy risks associated with the use of AI, ensure compliance with the requirements of the Personal Data (Privacy) Ordinance, and build trust with stakeholders.

Joining hands in ensuring the ethical and responsible use of AI
More than 10 years ago, the emergence of smartphones gave rise to internet banking apps, which were initially met with customer hesitation owing to fears of security risks such as identity theft and insecure data storage. However, as measures, including robust regulations, have been introduced to better safeguard personal data, customer acceptance has increased and mobile banking has become increasingly common.

Today, as genAI advances, the banking industry and society at large are again confronted with a crucial decision. Should we shun genAI just because it poses potential risks, thus stripping customers of a chance to enjoy better services? Or should we deploy the technology in a manner that ensures its ethical and responsible use, thereby harnessing the technology’s benefits without compromising our valued principles and rights? The answer is clear. Together, let us collaboratively craft a proper regulatory framework and establish norms to enable the development and use of AI in a privacy-friendly and ethical manner.