‘Hopeless’ to potentially handy: law firm tests chatbots


Artificial intelligence (AI) tools have got significantly better at answering legal questions but still cannot replicate the competence of even a junior lawyer, new research suggests.

The major British law firm Linklaters put chatbots to the test by setting them 50 “relatively hard” questions about English law.

It concluded OpenAI’s GPT-2, released in 2019, was “hopeless”, but its o1 model, which came out in December 2024, did considerably better.

Linklaters said it showed the tools were “getting to the stage where they could be useful” for real-world legal work – but only with expert human supervision.

Law – like many other professions – is wrestling with what impact the rapid recent advances in AI will have, and whether it should be regarded as a threat or opportunity.

The international law firm Hill Dickinson recently blocked general access to several AI tools after it found a “significant increase in usage” by its staff.

There is also a fierce international debate about how risky AI is and how tightly regulated it needs to be.

Last week, the US and UK refused to sign an international agreement on AI, with US Vice President JD Vance criticising European countries for prioritising safety over innovation.

This was the second time Linklaters had run its LinksAI benchmark tests, with the original exercise taking place in October 2023.

In the first run, OpenAI’s GPT-2, GPT-3 and GPT-4 were tested alongside Google’s Bard.

The exam has now been expanded to include o1, from OpenAI, and Google’s Gemini 2.0, which was also released at the end of 2024.

It did not involve DeepSeek’s R1 – the apparently low-cost Chinese model which astonished the world last month – or any other non-US AI tool.

The test involved posing the type of questions which would require advice from a “competent mid-level lawyer” with two years’ experience.

The newer models showed a “significant improvement” on their predecessors, Linklaters said, but still performed below the level of a qualified lawyer.

Even the most advanced tools made mistakes, left out important information and invented citations – albeit less often than earlier models.

The tools are “starting to perform at a level where they could assist in legal research”, Linklaters said, giving the examples of providing first drafts or checking answers.

However, it said there were “dangers” in using them if lawyers “don’t already have a good idea of the answer”.

It added that despite the “incredible” progress made in recent years, there remained questions about whether that would be replicated in future, or whether there were “inherent limitations” in what AI tools could do.

In any case, it said, client relations would always be a key part of what lawyers did, so even future advances in AI tools would not necessarily bring to an end what it called the “fleshy bits in the delivery of legal services”.


