
ChatGPT and AI – What are the risks?

LCF Law | Corporate & Commercial

Artificial Intelligence (AI) has transformed the way we live and work, with many industries embracing the technology to improve efficiency, productivity, and customer experience. One AI system that has recently gained a lot of mainstream attention is ChatGPT, an AI-powered text generator that can be applied in a number of professional contexts, such as customer support, content creation, education and research.

However, despite its potentially transformative capabilities, the use of ChatGPT by businesses poses a number of complex legal and practical risks, which should be considered in respect of any AI that you introduce within your organisation. We consider a number of those risks in this article:

Accuracy of Output
Generative AI such as ChatGPT provides information based on the data it is trained on and the questions it is asked. While the AI is designed to provide accurate information, errors and inaccuracies can occur. Indeed, ChatGPT has been in the news recently for inventing facts with total confidence – one UK broadsheet newspaper even reports receiving requests for archived material that it cannot supply, because the articles cited by ChatGPT do not exist!

Businesses that use information obtained from AI such as ChatGPT therefore need to be aware that there may be no recourse against the system developer if the information the AI provides turns out to be inaccurate. For example, in the case of ChatGPT, its Terms of Use specifically state that OpenAI (the company that developed ChatGPT) is not liable for any damages arising from the use of the platform, which is provided “as is”, without any warranty that the output is accurate or capable of being used for a particular purpose.

Ownership of Output
The ownership of intellectual property (IP) in content generated by AI is an evolving area of law, and organisations that deploy AI should seek legal advice to understand their IP rights and responsibilities in relation to content generated by that technology. In the case of generative AI such as ChatGPT, which requires an individual to input data (or direct that process) in order to generate content, it is likely that ownership of any IP (most likely copyright) in that content would belong to the individual who generated it, rather than the AI system developer or the AI system itself (which in any case is not a recognised legal entity capable of owning IP rights or being recognised as an author). This is reflected in ChatGPT’s Terms of Use, which state that the IP rights, title and interest in any content provided by ChatGPT are owned by the individual user who accesses the service. However, copyright protection requires a work to be original, meaning that the author has created the work through their own skill, judgement and effort. The position is therefore much more difficult where generative AI is advanced enough to make its own decisions (such that the individual user is not directing the process at all).

Even if the copyright in AI-generated content is typically owned by the human who uses the AI to create that content, ownership is further complicated by the fact that the AI’s output may not be original or unique, as the same response may be given to multiple users. Furthermore, depending on the nature of the AI, there is no guarantee that the AI’s output will not infringe existing copyright-protected works. In the case of ChatGPT (which draws on a huge amount of data in order to create its output), there is no easy way for users to tell whether ChatGPT’s responses have been pulled directly from an existing copyrighted work and are therefore infringing a third party’s intellectual property.

Privacy violations
There are also no guarantees that the data that generative AI takes from the internet to generate its output will not include the personal data of individuals. There is therefore a danger that personal data may be inadvertently processed or included in the output, which could be a breach of data protection laws. In particular, there is already discussion at EU level about action to prevent generative AI from infringing EU data privacy laws, with the Italian data protection authority taking the initiative and recently imposing a temporary ban on ChatGPT because of this possibility.

Similarly, there are data protection implications if AI is used in other contexts. For example, if AI captures information about employees (e.g., through a camera, microphone, or sensor), this is likely to be personal data. Businesses will therefore need to ensure that any personal data captured by AI in the workplace is processed in a way that accords with relevant privacy rules and with their own policies telling employees how the employer will handle their personal data.

AI in Products
Although not specific to ChatGPT, if an AI system is integrated into a physical product, such as a piece of machinery on a production line, then a key issue is who is liable if something goes wrong with the machinery. Traditionally, when something went wrong in these circumstances, the problem could generally be traced either to a defect in the machine itself (so liability might lie with the machine supplier) or to the machine being incorrectly operated (so liability might lie with the employer of the operator). However, in an environment where AI is incorporated into a machine, in addition to the above, faults might also stem from the AI software that forms part of the machine, or from the telecommunications that allow the machines to communicate with one another or with the internet.

Liability for AI can be attributed to a particular entity via contractual agreements between the relevant parties. This means that reviewing the contracts under which AI products are procured is more important than ever, so that businesses can seek to apportion liability when it arises and recover from the relevant supplier at least some of the losses or costs they may incur as a result of any failures. Businesses will also need to make sure that they have insurance in place to cover the relevant additional risks.

As this article makes clear, the potential liabilities associated with using AI will depend on the specific context in which it is used. Organisations that deploy AI should seek legal advice to understand these potential liabilities and to mitigate the potential risks.

What can we do to help?

For advice and assistance in the implementation and use of AI in your business, or on AI generally, please contact either James Sarjantson on 0113 201 0401 (jsarjantson@lcf.co.uk) or Thomas Taylor on 0113 204 0407 (ttaylor@lcf.co.uk).