
Will AI replace professional advice?

LCF Law | Corporate & Commercial

The rapid rise of ChatGPT shows no sign of abating. However, the increasing use of ChatGPT by professional advisers is raising concerns about the accuracy of the information the tool provides, and is creating an ethical dilemma around the responsibility and accountability of the professionals who use it.

Recently, a New York lawyer relied on ChatGPT to prepare a court filing for his client. The only problem was that several of the cases he supplied to the court as evidence of precedent “appear to be bogus judicial decisions with bogus quotes and bogus internal citations” said Judge Castel of the Southern District of New York.

Steven A. Schwartz, a lawyer who has been licensed in New York for over 30 years, then confessed in an affidavit that he’d used ChatGPT to produce the cases in support of his client’s claim and was “unaware of the possibility that its content could be false”. Screenshots attached to a further filing appear to show Schwartz asking ChatGPT whether a case it provided was real and what its sources were. The artificial intelligence tool falsely responded that the case was real and could be found on legal reference databases such as LexisNexis and Westlaw.

As a result, Schwartz now awaits the outcome of a sanctions hearing, which may see him ordered to pay the legal fees the other side incurred while uncovering the false information he filed.

Days after Schwartz was threatened with sanctions, a federal judge in Texas introduced a requirement for lawyers in cases before him to attest that they did not use artificial intelligence to write their legal briefs, due to AI tools’ propensity to invent facts. In what appears to be a first, US Judge Brantley Starr is specifically requiring lawyers to file a certificate indicating that their filings were not generated in whole or in part by an AI tool, or alternatively that a human has checked any AI-generated text.

Judge Starr has laid out the requirement on his court’s website, stating that while AI tools such as ChatGPT can be “incredibly powerful,” they are also “prone to hallucinations and bias. On hallucinations, they make stuff up – even quotes and citations.” Judge Starr also noted a further danger of over-reliance on AI: lawyers swear an oath to uphold the law and represent their clients, whereas AI platforms are bound by no such considerations. “Unbound by any sense of duty, honor [sic], or justice, such programs act according to computer code rather than conviction, based on programming rather than principle,” the judge said.

The lesson from these cases appears to be that, while AI solutions such as ChatGPT have enormous potential to transform the way professional advice (legal or otherwise) is delivered, there are currently a number of clear risks and limitations to using AI in this context.

Accuracy: Tools such as ChatGPT will inevitably make research in many areas much quicker (and therefore potentially cheaper for the end customer). However, professional advisers ultimately owe a duty of competence to their clients; if an AI solution makes an error or provides inaccurate information, it is the adviser, and not the AI, that will be responsible for that error. In the case of ChatGPT, for example, its terms of use specifically state that OpenAI (the company that developed ChatGPT) are not liable for any damages arising from use of the platform, which is provided “as is”, without any warranty that the output is accurate or suitable for a particular purpose.

Professional advisers must therefore recognise the current limitations of AI technology and must not rely on it exclusively. There is still no substitute (yet) for checking work. It is common practice for professional advisers to supervise the staff who support them on tasks such as research; as things stand, they will need to extend that supervision to cover the work of AI solutions as well.

AI technology will no doubt improve further, but ultimately users will want to rely on its output, and the key question is: who is willing to stand behind that output? OpenAI are not prepared to (perhaps understandably). If professional advisers are, they will need to consider what safeguards to put in place to ensure the accuracy of that output (and no doubt their professional indemnity insurers will want to be part of this process).

Scope of Research: AI solutions are only as effective as the data they are trained on. If they are not trained on a comprehensive range of data, they may not be able to provide a complete picture of a specific issue. For example, the version of ChatGPT that is currently freely available online is not connected to the internet and has limited knowledge of developments after 2021. Those willing to pay for the premium ChatGPT Plus plan can, however, install a web-browsing plugin that allows ChatGPT to draw on data from the internet when answering the questions posed to it.

Professional advisers therefore need to be mindful that there are limits to the scope of research an AI solution can provide, whether or not it is connected to the internet. If the AI solution is not connected to the internet, it will have been trained on a data set that may not be complete or up to date. If, on the other hand, the tool is connected to the internet, the information it receives will not be a curated data set, so the accuracy of its output will depend on the quality of, and any bias in, the search engine the AI solution uses. For example, Google’s algorithm prioritises modern web technologies such as encryption and mobile compatibility, so many websites with high-quality content that lack these features may not be drawn on by the AI solution when generating output.

Ownership of Output: As current AI solutions cannot properly list and credit the materials reproduced in their output, there is a risk that any output they provide may infringe a third party’s intellectual property rights, in particular third-party copyright. This makes it unwise for professional advisers to reproduce that output directly in documents they then publish (such as the court documents in our example above).

Confidentiality: Finally, professional advisers also owe their clients a duty of confidentiality, and in using an AI solution they risk handing their clients’ data to the AI companies to train and improve their models, potentially violating confidentiality laws. This is the case with ChatGPT, which, according to an FAQ article on the OpenAI website, uses data inputted into the system to train and improve its models. While the article goes on to state that OpenAI takes steps to reduce the amount of personal information in its datasets, professionals can and should go to the ChatGPT settings (under Data Controls) before using the platform and opt out of their data being used to train ChatGPT’s models.

Developments in AI are progressing at a rapid pace. However, in the rush to adopt AI solutions, it is important that professional advisers take the time to consider how best to use them to improve and supplement (rather than entirely replace) their services, while also considering the potential ramifications that may arise if these solutions are not used appropriately.

What can we do to help?

For advice and assistance on the implementation and use of AI in your business, or on AI generally, please contact either James Sarjantson on 0113 201 0401 (jsarjantson@lcf.co.uk) or Thomas Taylor on 0113 204 0407 (ttaylor@lcf.co.uk).
