Artificial Intelligence (AI) has transformed the way we live and work, with many industries embracing the technology to improve efficiency, productivity, and customer experience. One AI system that has recently gained mainstream attention is ChatGPT, an AI-powered text generator which could be applied in a number of professional contexts such as customer support, content creation, education and research.
However, despite its potentially transformative capabilities, the use of ChatGPT by businesses poses a number of complex legal and practical risks. These risks should be considered in respect of any AI that you introduce within your organisation, and we consider a number of them in this article:
Accuracy of Output
Generative AI such as ChatGPT provides information based on the data that it is trained on and the questions that it is asked. While the AI is designed to provide accurate information, it is possible for errors or inaccuracies to occur. Indeed, ChatGPT has been in the news recently for inventing facts with total confidence – one UK broadsheet newspaper even reports that it is receiving requests for archived material which it cannot supply because the articles cited by ChatGPT do not exist!
Ownership of Output
Although the copyright in AI-generated content is typically owned by the human who uses the AI to create that content, ownership is complicated by the fact that the AI's output may not be original or unique, as the same response may be given to multiple users. Furthermore, depending on the nature of the AI, there is no guarantee that the AI's output will not infringe existing copyright-protected works. In the case of ChatGPT (which draws on a huge amount of data in order to create its output) there is no easy way for users to tell whether ChatGPT's responses have been pulled directly from an existing copyrighted work and are therefore infringing a third party's intellectual property.
Data Protection
There is also no guarantee that the data that generative AI takes from the internet to generate its output will not include the personal data of individuals. There is therefore a danger that personal data may be inadvertently processed or included in the output, which could be a breach of data protection laws. In particular, there is already talk at EU level about action being required to prevent generative AI infringing EU data privacy laws, with the Italian data protection authority taking the initiative and recently placing a temporary ban on ChatGPT because of this possibility.
Similarly, there are also data protection implications if AI is used in other contexts, for example if AI captures information about employees (e.g., through a camera, microphone, or sensor) this is likely to be personal data. Businesses will therefore need to ensure that any personal data captured by AI in the workplace is processed in a way which accords with relevant privacy rules and their own policies that tell employees how the employer will deal with their personal data.
AI in Products
Although not specific to ChatGPT, if an AI system is integrated into a physical product such as a piece of machinery on a production line, then a key issue is who is liable if something goes wrong with the machinery. Traditionally, when something went wrong in these circumstances, the problem could generally be traced either to a defect in the machine itself (so liability might lie with the machine supplier) or to the machine being incorrectly operated (so liability might lie with the employer of the operator). However, where AI is incorporated into a machine, faults might also stem from the AI software which forms part of the machine, or from the telecommunications links which allow machines to communicate with one another or with the internet.
Liability for AI could be attributed to a particular entity via contractual agreements between the relevant parties. This means that reviewing the contracts under which AI products are procured is more important than ever, so that businesses can seek to apportion liability when it arises and recover from the relevant supplier at least some of the losses or costs which they may incur as a result of any failures. Businesses will also need to make sure that they have insurance in place to cover the relevant additional risks.
As this article makes clear, the potential liabilities associated with using AI will depend on the specific context in which it is used. Organisations that deploy AI should seek legal advice to understand these potential liabilities and to mitigate the potential risks.
For advice and assistance in the implementation and use of AI in your business or on AI generally, please contact either James Sarjantson on 0113 201 0401 - jsarjantson@lcf.co.uk or Thomas Taylor on 0113 204 0407 - ttaylor@lcf.co.uk