The launch of ChatGPT in November 2022,1 a chatbot developed by OpenAI and built on top of the GPT-3 family of large language models (LLMs), has sparked interest in this cutting-edge technology across all sectors. As more people recognise the power of its general-purpose design (it is not built for any specific task or context) and its human-like conversational engagement, it is gaining unprecedented traction.

Prior to ChatGPT, there were already a few successful applications designed for specific tasks, built on underlying models such as the GPT family and catering to specific needs in different domains. For instance, Jasper's text-generation products2 have boosted marketers' content creation.

Despite debate over whether general-purpose AI will be adopted more widely and capture market share from custom-trained applications, we believe there will be a boom in the creation and deployment of AI tools that take advantage of advances in LLMs. Current use cases include chatbots, search engines, summarisation, content creation (e.g., code, emails and documents) and language translation3 (together, the Intended Scope).

While many complex legal issues arise in connection with the development, commercialisation and deployment of AI technologies, we intend to address certain key issues in our article series. This first article provides a high-level roadmap and explores the legal aspects that in-house counsel should consider when an enterprise procures LLM-based AI tools within the Intended Scope from a third-party service provider (Service Provider) to empower employees and improve the efficiency of the company's existing workflows.

Under the PRC Legal Framework

China has a unique regulatory approach to the development and distribution of AI technologies, which, from a lawyer's perspective, can essentially be broken down into the regulation of software and of data. Any market player rolling out AI technologies within China is required to comply with the current legal framework.

This framework includes general legislation governing cyberspace (including, inter alia, the Cybersecurity Law,4 the Data Security Law,5 the Personal Information Protection Law,6 and the Administrative Measures for Internet Information Services7) and specific regulations targeting the AI area (including, among others, Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services8 and Provisions on the Administration of Deep Synthesis of Internet-based Information Services9).

As the current AI technologies are not explicitly covered by China's WTO accession commitments,10 we anticipate regulatory challenges for AI services that are administered and operated offshore (such as ChatGPT11) but delivered to the Chinese market on a cross-border basis. Legal issues concerning cross-border data transfers (i.e., outbound transfers in this scenario) will further complicate the feasibility of a cross-border delivery model. As for localised service delivery, local tech giants and start-ups are already racing to launch their own AI-powered products.12

Qualification of the Service Provider

When an enterprise conducts due diligence on a Service Provider, one key question is whether the Service Provider is legally permitted to provide the requested AI tool. Depending on the nature and characteristics of the service offering, the Service Provider may be subject to a corresponding licensing regime.

If the service offering includes functionalities captured by the Telecom Business Classification Catalogue,13 such as information search and real-time interaction, the Service Provider may need to obtain a value-added telecom service licence covering information services, issued by the competent administration of communications. The Provisions on the Administration of Deep Synthesis of Internet-based Information Services specifically require that online publishing services, online cultural activities and online audio-visual programme services comply with the respective regulatory requirements.

In-house counsel should also consider whether the Service Provider has implemented comprehensive and robust security systems to safeguard the infrastructure and the models and data running on it. Such an assessment may cover compliance with the regulatory requirements under the Cybersecurity Law (e.g., implementing the graded protection system for network security), applicable national standards14 and sectoral standards15 and best practices.

Contracting Issues

Enterprises are increasingly using cloud-based software (SaaS), so the relevant legal issues may be familiar to in-house counsel. We anticipate that the delivery model for AI tools will be largely the same: B2B AI tools will be provided to the enterprise on a subscription basis in the cloud. In certain circumstances, an on-premises licence may still be a feasible option for enterprises with special needs. In-house counsel will need to review the licensing terms thoroughly to ensure that the rights and obligations between the provider/licensor and the customer/licensee are reasonably allocated.

On the other hand, B2C AI services will likely remain accessible to individual subscribers through click-accept licensing terms.

Data Protection

Issues concerning cross-border data transfers are outside the scope of this article (as the data will likely need to be localised under applicable Chinese laws). In-house counsel should, however, be mindful of the potential data issues involved across the entire workflow.

If the data fed to the AI tool (as prompts) contains personal data, it is necessary to consider whether a robust data policy is in place governing the collection, transfer, processing and storage of such data. It is critical to clarify whether such information will be used by the Service Provider to pre-train or fine-tune the underlying AI models. If so, either explicit consent should be obtained or the data should be masked (anonymised).
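By way of illustration only, the following minimal sketch shows how an enterprise might mask personal data in prompts before they leave its environment. The mask_prompt helper and the regular expressions are our own assumptions for this example, not a complete personal-data taxonomy or any provider's actual tooling:

    import re

    # Illustrative patterns only; real personal data (names, addresses,
    # ID numbers, etc.) requires a far more complete taxonomy.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\b\d{3}[-\s]?\d{4}[-\s]?\d{4}\b"),
    }

    def mask_prompt(prompt: str) -> str:
        """Replace recognised personal data with placeholder tokens."""
        for label, pattern in PII_PATTERNS.items():
            prompt = pattern.sub(f"[{label}]", prompt)
        return prompt

    print(mask_prompt("Reach Ms. Zhang at zhang@example.com or 138-1234-5678."))
    # Output: Reach Ms. Zhang at [EMAIL] or [PHONE].

Note that rule-based masking will miss personal data (such as names) that does not follow a fixed pattern, which is one reason explicit consent may still be required.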

Additionally, both the business and legal teams should carefully consider whether information relating to the enterprise's business (which may qualify as trade secrets) may be fed to the AI tool, and how to address any resulting risks to the business (e.g., loss of trade secret protection).

The legal framework governing data protection and usage is constantly evolving. Accordingly, it is important to keep track of developments in the regulatory landscape.16

Cybersecurity

It is important to be aware of the risks of malware attacks and data breaches when using in-cloud AI services. In addition to assessing the reliability of the Service Provider as indicated above, common approaches to mitigating these risks include strengthening access security (e.g., by implementing strong passwords and enabling multi-factor authentication), monitoring suspicious account activity and regularly updating software and systems (applying security patches to address known vulnerabilities).
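To give a flavour of what monitoring suspicious account activity might involve, here is a minimal sketch; the event format, threshold and flag_suspicious_logins helper are assumptions for this example only, not a substitute for a proper security monitoring solution:

    from collections import Counter

    # Each event: (account, login_succeeded); in practice these would be
    # drawn from the identity provider's audit logs.
    events = [
        ("alice", False), ("alice", False), ("alice", False),
        ("alice", False), ("bob", True),
    ]

    FAILURE_THRESHOLD = 3  # assumed policy value

    def flag_suspicious_logins(events, threshold=FAILURE_THRESHOLD):
        """Flag accounts whose failed logins reach the threshold."""
        failures = Counter(acct for acct, ok in events if not ok)
        return [acct for acct, n in failures.items() if n >= threshold]

    print(flag_suspicious_logins(events))  # ['alice']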

Intellectual Property

One hotly debated issue with generative AI technology is whether copyright can subsist in content generated by, or with the assistance of, an AI tool. This is especially relevant for enterprises engaged in content creation, such as marketing companies, which may already be grappling with copyright issues.

The question of whether copyright can subsist and will be recognised under PRC law, and who will own it, may well depend on the facts. The copyright implications of content generated by an AI tool with little or no human intervention are different from those where the AI tool simply makes cosmetic changes to the user's writing.

Another important factor to consider is the risk of copyright or trade mark infringement. As AI models are typically trained using data scraped from the internet, and are opaque to users, it can be difficult to rule out the risk that content generated by AI could potentially infringe existing copyright or trade marks of a third party.17

Similarly, there is a risk that a user's input (which might contain material in which the user owns intellectual property rights, such as copyright) may be misused by other users of the same AI tool, or by tool developers, when they use the same underlying AI models via an API.18

Bias and Discrimination

Bias and discrimination are common ethical concerns around AI technologies.19 For instance, if the enterprise uses an AI tool to screen employee candidates, the tool may make decisions that exclude certain demographics based on its learning from historical data that might itself be biased or discriminatory.20 Employment discrimination is prohibited by the Employment Promotion Law,21 so this may result in legal liability for the enterprise.

The so-called "black box" problem in machine learning makes it difficult for a human to understand and monitor the decision-making process inside the algorithm.22 Researchers are developing explainable AI techniques to increase the transparency of AI systems and enable humans to understand how a particular decision was reached.23 Additionally, constraints can be put in place by the Service Provider and/or the enterprise user to make generative use cases safer, which generally include system designs that provide for the following (as sketched in the example after this list):24

  • Keeping a human in the loop.
  • End user access restrictions.
  • Post-processing of outputs.
  • Content filtration.
  • Input/output length limitations.
  • Active monitoring.
  • Topicality limitations.
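By way of illustration only, below is a minimal sketch of how several of these constraints might be composed around a model call. The limits, blocked-topic list and helper names are hypothetical and do not reflect any particular provider's actual API:

    MAX_INPUT_CHARS = 2000    # input length limitation (assumed value)
    MAX_OUTPUT_CHARS = 4000   # output length limitation (assumed value)
    BLOCKED_TOPICS = {"medical advice", "legal advice"}  # topicality limitations

    def log_for_review(prompt: str, output: str) -> None:
        # Active monitoring / human in the loop: in practice this would
        # feed a review queue checked by humans.
        print(f"AUDIT: {len(prompt)} chars in, {len(output)} chars out")

    def guarded_generate(prompt: str, model_call) -> str:
        """Wrap a raw model call with simple pre- and post-processing guardrails."""
        if len(prompt) > MAX_INPUT_CHARS:
            raise ValueError("prompt exceeds input length limit")
        if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
            return "[request refused: out-of-scope topic]"  # content filtration
        output = model_call(prompt)                         # underlying model
        log_for_review(prompt, output)
        return output[:MAX_OUTPUT_CHARS]                    # post-processing of outputs

    # Usage with a stand-in model:
    print(guarded_generate("Summarise the attached meeting notes.",
                           lambda p: "Summary: ..."))

End-user access restrictions would typically sit in front of such a wrapper, for example by gating it behind the enterprise's identity provider.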

A practical way for enterprises facing technical constraints to mitigate risks in this regard is to ensure that the Service Provider has in place ethical guidelines and principles for the development and deployment of AI systems.25 Such guidelines should be consistent with the moral values upheld by the enterprise.

Unfair Competition

The power of AI models may be used by certain business operators to undermine competitors' protected interests and gain unfair advantages. We therefore expect that competition and anti-trust law will also be highly relevant to the development and deployment of AI technologies.

On the one hand, Service Providers themselves may be subject to regulatory scrutiny, depending on the circumstances. For example, an image-generation business operator might scrape image data from a competitor's data pool to fine-tune its own model, eroding the competitor's market share. Such data scraping may be deemed a violation of the Anti-Unfair Competition Law.26

On the other hand, depending on the role of the AI tools in the workflow, enterprise users of the relevant services may face challenges from regulators or competitors regarding the nature of the work product generated by the AI tool and incorporated into the products and services they supply to downstream customers.

Again, the black box problem may make it harder to monitor generated content for compliance with regulatory parameters (e.g., not creating or disseminating false or misleading information about competitors).27 It would also complicate the determination of the "malicious intent" of the AI tool user concerned, which is a constituent element of certain unfair practices.28

Johnny Liu also contributed to this article.

1. See https://zh.wikipedia.org/zh-hk/ChatGPT.
2. See https://www.jasper.ai/blog/gpt-3-content-generator.
3. See https://blogs.nvidia.com/blog/2023/01/26/what-are-large-language-models-used-for/ and https://www.analyticsinsight.net/top-10-applications-for-large-language-models-in-2023/.
4. In Chinese《网络安全法》, effective from 1 June 2017.
5. In Chinese 《数据安全法》, effective from 1 September 2021.
6. In Chinese《个人信息保护法》, effective from 1 November 2021.
7. In Chinese 《互联网信息服务管理办法》, effective from 8 January 2011.
8. In Chinese《互联网信息服务算法推荐管理规定》, effective from 1 March 2022.
9. In Chinese《互联网信息服务深度合成管理规定》, effective from 10 January 2023.
10. See http://www.gov.cn/gongbao/content/2017/content_5168131.htm.
11. OpenAI has prohibited access by users located in certain jurisdictions including China and Hong Kong S.A.R. See https://finance.sina.com.cn/chanjing/gsnews/2023-02-17/doc-imyfywyt6841773.shtml.
12. See https://www.forbeschina.com/business/63448.
13. In Chinese《电信业务分类目录(2015年版)》, effective from 1 March 2016 and amended on 6 June 2019.
14. For example, see the National Public Service Platform for Standards Information (in Chinese 全国标准信息公共服务平台, samr.gov.cn).
15. For example, see the Industry Standards Information Service Platform (in Chinese 行业标准信息服务平台, sacinfo.org.cn).
16. Kindly note that a classified and hierarchical system for affirming ownership of, and authorising the use of, public data, corporate data and personal data is anticipated to be established in accordance with the Opinions of the CPC Central Committee and the State Council on Establishing a Data Base System to Maximize a Better Role of Data Elements (in Chinese《中共中央、国务院关于构建数据基础制度更好发挥数据要素作用的意见》, effective from 2 December 2022).
17. For example, see https://www.reuters.com/legal/getty-images-lawsuit-says-stability-ai-misused-photos-train-ai-2023-02-06/.
18. An application programming interface (API) is a way for two or more computer programs to communicate with each other. See https://en.wikipedia.org/wiki/API.
19. See https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/.
20. For example, see https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
21. In Chinese《就业促进法》, effective from 1 January 2008 and amended on 24 April 2015.
22. See https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/8.
23. See https://www.researchgate.net/publication/361983895_The_Black_Box_Problem.
24. See https://openai.com/blog/openai-api.
25. Provisions on the Administration of Algorithm-generated Recommendations for Internet Information Services and Provisions on the Administration of Deep Synthesis of Internet-based Information Services both require the AI service provider to establish the system of scientific and technological ethics review.
26. In fact, the draft amended Anti-Unfair Competition Law circulated in late 2022 for public comment (see https://www.samr.gov.cn/hd/zjdc/202211/t20221121_351812.html) explicitly provides that business operators are prohibited from engaging in unfair competition by exploiting their advantages in data, algorithms, technology and capital, as well as platform rules.
27. Article 11 of the Anti-Unfair Competition Law (in Chinese《反不正当竞争法》, effective from 23 April 2019) provides that a business shall not fabricate or disseminate false or misleading information to damage the goodwill or product reputation of a competitor.
28. Article 12 of the Anti-Unfair Competition Law provides that no business may, by technical means to influence users' choices or otherwise, interfere with or sabotage the normal operation of online products or services legally provided by another business, including by maliciously causing incompatibility with such products or services.
