British judge uses ‘jolly useful’ ChatGPT to write ruling

Gareth Corfield
Thu, 14 September 2023 

Court of Appeal

A Court of Appeal judge has called ChatGPT “jolly useful” after he used the artificial intelligence chatbot to write part of a ruling.

Lord Justice Birss, who specialises in intellectual property law, said he had used the text-generation tool to summarise an area of law he was familiar with before copying and pasting its words into a court ruling.

Speaking at a conference held by The Law Society, Lord Justice Birss said ChatGPT and similar AI programs had “great potential”.


He added: “I think what is of most interest is that you can ask these large language models to summarise information. It is useful and it will be used and I can tell you, I have used it.

“I asked ChatGPT can you give me a summary of this area of law, and it gave me a paragraph. I know what the answer is because I was about to write a paragraph that said that, but it did it for me and I put it in my judgment. It’s there and it’s jolly useful.

“I’m taking full personal responsibility for what I put in my judgment, I am not trying to give the responsibility to somebody else. All it did was a task which I was about to do and which I knew the answer to and could recognise as being acceptable.”

The judge’s remarks, which were originally reported by The Law Society Gazette, are the first known occasion where a member of the British judiciary has used an AI chatbot to write part of a judgment.

The Judicial Office declined to comment, or to say whether it has issued any formal guidance for judges around using ChatGPT.

Earlier this year a Colombian judge used ChatGPT to help decide a case, asking the chatbot whether a disabled child’s medical insurance should cover the cost of related therapies.

The chatbot said “yes”, and the ruling ignited a global debate on the rightful place of AI in the world’s legal systems.

In June, a New York lawyer said he was “humiliated” after he was forced to admit he used ChatGPT to carry out legal research.

In an example of what experts have dubbed the “hallucination problem,” ChatGPT made up six fictitious court cases it cited in support of Steven Schwartz’s legal argument.

Judge Kevin Castel later fined Mr Schwartz and his firm, Levidow, Levidow and Oberman, $5,000 (£4,029) after describing their submissions as “gibberish”. The judge criticised the firm for “abandoning their responsibilities” and continuing to “stand by the fake opinions after judicial orders called their existence into question”.

The hallucination problem describes AI chatbots’ tendency to generate false but convincing-sounding information in response to human queries.

A number of City law firms have shunned ChatGPT’s technology in recent months.

Mishcon de Reya banned the use of ChatGPT in March, with a senior partner saying the firm “has a clear policy in place that client and confidential firm information must not be uploaded to ChatGPT or any other models.”

In contrast, Magic Circle firm Allen & Overy has developed its own AI chatbot called Harvey.

The tool, developed with support from ChatGPT makers OpenAI, is designed to automate tasks such as drafting contracts, although the firm says its output is reviewed by a human lawyer.
