
An Overview of the AI Regulatory Framework in Hong Kong



Aug 07, 2023

Introduction

The advent of artificial intelligence (“AI”) has prompted much debate as to how this will revolutionise the way in which we work and live – from personalised advertisements, “smart” homes and cities, to AI-assisted diagnosis and treatment of life-threatening diseases. Whilst AI technology is rapidly evolving and developers and businesses are racing to bring nascent AI applications and services to the market, the current laws in Hong Kong may be incapable of fully addressing the effects, implications and potential ramifications of AI.

This article provides an overview of the current AI regulatory framework in Hong Kong in the following areas:

I. Data protection and privacy: Processing of personal data by AI systems and guidelines on the development and use of AI

II. Intellectual property (“IP”): Potential infringement of IP by AI systems and ownership of AI-generated IP

III. Operation and deployment of AI systems and applications: Hong Kong’s position and future directions

IV. Potential bias of AI systems: Existing safeguards under Hong Kong laws

V. Industry guidance: Industry-specific guidance from the Hong Kong Monetary Authority (“HKMA”) and the Securities and Futures Commission (“SFC”)

I. Data protection and privacy: Processing of personal data by AI systems and guidelines on the development and use of AI

Personal Data (Privacy) Ordinance (Cap. 486) (“PDPO”)

In Hong Kong, data protection and privacy are primarily governed by the PDPO. It requires data users to comply with six data protection principles (“DPPs”) for the gathering, processing and usage of personal data. Notably, the PDPO protects only personal data. It does not protect or regulate other data.

“Data user” is defined in the PDPO as a person who, either alone or jointly with other persons, controls the collection, holding, processing or use of personal data. An operator of an AI system which collects and processes personal data in Hong Kong would be a data user and should comply with the data protection principles.

Below are some examples of how organisations developing and using AI systems may comply with the DPPs as data users:

  • DPP 1: If personal data will be used to train an AI system, data subjects should be informed, explicitly or implicitly, on or before collection of the data; personal data should not be used to train an AI model without the data subject being so informed.
  • DPP 2: Personal data from the AI system should be erased if no longer necessary for the development and use of AI.
  • DPP 3: The personal data collected must not be used for a new purpose that is different from and unrelated to the original purpose of collection. If previously collected personal data is used for a new purpose (e.g. AI model training), the data user should either obtain the express and voluntary consent of the data subject or the personal data should be anonymised.

Breach of a data protection principle per se is not an offence. However, if a data user is found to be in breach of the PDPO, the Privacy Commissioner for Personal Data may serve an enforcement notice on the data user directing remedial and/or preventive steps. Contravention of such enforcement notice constitutes an offence under section 50A(1) of the PDPO and may result in a maximum fine of $50,000 and imprisonment for 2 years, along with a daily penalty of $1,000.

Further, a data user who discloses personal data without the consent of the data subject, with an intent to obtain monetary gain (or to cause monetary or property loss to the data subject), commits a criminal offence under section 64(1) of the PDPO and is liable on conviction to a fine of $1,000,000 and imprisonment for 5 years.

Guidance on the Ethical Development and Use of Artificial Intelligence

To assist organisations with understanding and complying with the PDPO provisions when developing and using AI, the Office of the Privacy Commissioner for Personal Data (“PCPD”) published the Guidance on the Ethical Development and Use of Artificial Intelligence in August 2021.

The Guidance recommends three fundamental Data Stewardship Values:

  • being respectful
  • being beneficial
  • being fair

It also sets out seven Ethical Principles for AI which are in line with internationally recognised standards, namely:

  • accountability
  • human oversight
  • transparency and interpretability
  • data privacy
  • fairness
  • beneficial AI
  • reliability, robustness, and security

Whilst compliance with the Guidance is not mandatory, any non-compliance can be taken into account by the PCPD when determining whether a data user is in breach of the PDPO.

 

II. IP: Potential infringement of IP by AI systems and ownership of AI-generated IP

Potential infringement of IP by AI systems

IP rights are protected in Hong Kong under the common law and various legislation, such as the Copyright Ordinance (Cap. 528), the Trade Marks Ordinance (Cap. 559), and the Patents Ordinance (Cap. 514).

If a developer uses copyrighted materials to train an AI system, the AI system may create and generate content which infringes the copyright of others. For instance, if a piece of AI-generated artwork is based on a copyrighted artwork, the developer may infringe the copyright of the existing artwork by reproducing the work in a material form electronically: section 23 of the Copyright Ordinance.

IP infringement would give rise to a cause of action in courts. In the event of an IP infringement, the owner may seek remedies such as damages, injunction and surrender of the infringing products. Moreover, making or dealing with infringing articles could amount to a criminal offence. The Customs and Excise Department has power to investigate such copyright infringements and institute prosecutions.

Ownership of AI-generated IP

AI systems can now generate increasingly sophisticated and ever-expanding forms of content, from text and poetry to code, designs, images, soundtracks and videos. However, questions arise as to who owns the IP rights in such AI-generated content.

In the UK case of Thaler v. Comptroller-General of Patents [2021] EWCA Civ 1374, the UK Court of Appeal affirmed that an AI system cannot be designated as a patent inventor, since it is not a "natural person" and lacks the legal personhood necessary to be recognised as an inventor. Consequently, the AI system is unable to transfer any patent rights to another person, such as the developer. Similar applications have been filed in a number of other jurisdictions but have largely been unsuccessful, although it appears that South Africa’s Companies and Intellectual Property Commission has granted Thaler’s application through its depositary system.

As far as we are aware, there has been no reported case in Hong Kong concerning AI-generated IP to date so the position in Hong Kong is not definitive.

However, section 178 of the Copyright Ordinance prescribes that a work qualifies for copyright protection if the author was at the material time either an individual domiciled in, resident in, or having a right of abode in Hong Kong or elsewhere, or a body incorporated under the law of any country, territory or area. Therefore, it appears that an AI system, being neither an individual nor a legal body, cannot qualify as a copyright owner.

For other IP rights, the decision of Thaler in the English courts would be of persuasive value. Hong Kong courts may follow the UK approach and require a natural person to be an inventor and, by analogy, an owner of other IP rights.

The question of whether a non-human can be an IP owner may seem academic for now. But as AI becomes more capable and prevalent, stakeholders may need to reconsider and reimagine how IP laws operate in this new era.

Further, as between users and developers, ownership of the rights to AI-generated content based on a user’s input will also be a matter of contract. Parties should review or put in place appropriate policies and terms of use governing the ownership and use of such rights.

 

III. Operation and deployment of AI systems and applications: Hong Kong’s position and future directions

The rapid development of AI technology has prompted legislators and regulators in certain jurisdictions to devise and establish regulations for the operation and deployment of AI systems to mitigate the risks brought about by such technology.

For instance, the European Union has proposed an Artificial Intelligence Act which adopts a risk-based approach to regulating AI systems – applications posing unacceptable risks are banned, high-risk applications are subject to specific legal requirements, and other applications are left largely unregulated.

Separately, China has recently released its own interim rules on generative AI, which will come into effect on 15 August 2023. The interim rules apply to generative AI services offered to the public in Mainland China and impose certain safety assessment and algorithm filing requirements on major or influential generative AI service providers.

In comparison, no regulation or legislation on AI systems has yet been proposed in Hong Kong. However, Hong Kong will no doubt be monitoring overseas developments closely to keep pace in this nascent and rapidly evolving area.

Professor Sun Dong, Secretary for Innovation, Technology and Industry, commented earlier this year that a special task force will be established to recommend the most effective approach in dealing with the revolutionary impact of ChatGPT, with legislation being one of the possibilities.

The Hong Kong Government recognises the importance of AI technology. It has developed an Ethical Artificial Intelligence Framework (“Framework”) regarding the application of AI and big data analytics in implementing IT projects and services. Although originally designed for internal use, the Government has published an adapted version of the Framework so that organisations may refer to the principles and practices set out in the Framework when implementing IT projects or services.

As countries compete to develop their own homegrown AI industries and champions, regulators must strike a delicate balance between AI’s potential benefits and the unique challenges posed by AI technology to ensure that AI systems are deployed safely and ethically going forward.

 

IV. Potential bias of AI systems: Existing safeguards under Hong Kong laws

AI systems such as large language models use large sets of data to train their underlying algorithms. As the underlying data may contain biases and prejudices, AI systems can reflect and reinforce human biases, leading to unfair or discriminatory outcomes.

A notable example is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a risk assessment tool that uses an algorithm to assess a defendant’s likelihood of reoffending and is intended to assist judges in the US in the sentencing process. However, an investigation conducted by a non-profit organisation suggested a significantly higher rate of false positives for recidivism among black offenders than among white offenders, indicating that COMPAS had internalised a common bias – the idea that black individuals tend to commit more crimes on average than white individuals and are more likely to re-offend in the future.

In Hong Kong, anti-discrimination laws comprising the Sex Discrimination Ordinance (Cap. 480), the Disability Discrimination Ordinance (Cap. 487), the Family Status Discrimination Ordinance (Cap. 527) and the Race Discrimination Ordinance (Cap. 602) prohibit discrimination against a person on the grounds of sex, marital status, pregnancy, disability, family status and race. These laws offer a measure of protection but cannot fully address the complexities of AI-generated bias or other grounds of discrimination.

For example, AI recruitment tools may learn from past data that certain job positions are typically occupied by men, leading them to discriminate against qualified female candidates. This results in the perpetuation of historical biases, even if current anti-discrimination laws are in place.

To minimise AI biases arising in the first place, businesses and organisations may refer to the PCPD’s Data Stewardship Value of ‘being fair’ which applies to both the processes and the results. In terms of the processes, decisions should be made reasonably, without any unjust bias or unlawful discrimination. In terms of the results, people should be treated in a like manner and any differential treatment towards different individuals or groups of people should be based on justifiable grounds. Both AI algorithms and outputs should be evaluated to screen out bias as much as practicable.

 

V. Industry guidance: Industry-specific guidance from the HKMA and the SFC

HKMA

In November 2019, the HKMA published a circular on “High-level Principles on Artificial Intelligence” covering three aspects of AI technologies – governance, application design and development, and ongoing monitoring and maintenance. To avoid hindering the development of AI-related technologies, the principles are high-level in nature, and banks are only expected to apply them in a manner proportionate to the nature of their AI applications and the level of associated risks.

In the same month, the HKMA also issued a set of guiding principles on consumer protection aspects in the use of AI applications. The principles are centred around four key areas: governance and accountability, fairness, transparency and disclosure, and data privacy and protection. The HKMA reminds authorized institutions to adopt a risk-based approach commensurate with the risks involved in their use of AI applications when employing these principles.

SFC

The SFC is attuned to the development of AI and employs AI-assisted technology in its operations, but it has yet to issue any specific guidance on AI – partly because there is no pressing need to do so given the highly regulated nature of the sector and numerous existing guidelines on subjects such as cybersecurity and risk management.

Nonetheless, in her speech at the HKIFA 16th Annual Conference on 5 June 2023, the SFC’s CEO, Ms. Julia Leung, commented on AI in terms which shed some light on what could be expected of licensed corporations when using AI applications:

“I believe generative AI can be used responsibly to augment, rather than replace, asset managers in strategic decision making. As a regulator, the SFC is guided by our philosophy to promote the responsible deployment of technology as long as it enhances market efficacy and transparency, cost savings and investor experience. However, at this stage, firms must take its output with a grain of salt, stay alert to AI-related risks and make sure clients are treated fairly. We expect licensed corporations to thoroughly test AI to address any potential issues before deployment, and keep a close watch on the quality of data used by the AI. Firms should also have qualified staff managing their AI tools, as well as proper senior management oversight and a robust governance framework for AI applications. For any conduct breaches, the SFC would look to hold the licensed firm responsible—not the AI.”

Whilst AI tools and applications will become increasingly capable and commonplace, licensed corporations should note that, ultimately, they – and not the AI – will be held responsible for any non-compliance.

Conclusion

The AI regulatory landscape in Hong Kong is still somewhat fragmented. This article offers an overview of the following AI-related issues:

  • Data protection and privacy: Processing of personal data by AI systems and guidelines on the development and use of AI
  • IP: Potential infringement of IP by AI systems and ownership of AI-generated IP
  • Operation and deployment of AI systems and applications: Hong Kong’s position and future directions
  • Potential bias of AI systems: Existing safeguards under Hong Kong laws
  • Industry guidance: Industry-specific guidance from the HKMA and the SFC

As AI technology continues to develop, Hong Kong needs to remain vigilant about international legal developments, strive to keep pace with technological advancements and maintain a balance between innovation and protection. Looking ahead, as AI technology gradually matures, the Government may consider a dedicated regulatory framework for AI systems to address their evolving risks and challenges.

 

Pan Tsang and Juno Guo

 

For specific advice on technology law and AI-related matters in Hong Kong, please contact:
Pan Tsang | pan_tsang@robertsonshk.com | +852 2861 8487

 

Disclaimer: This publication is general in nature and is not intended to constitute legal advice. You should seek professional advice before taking any action in relation to the matters dealt with in this publication.

 
