Jude Copeland: AI in law – friend or foe?

Jude Copeland of Northern Ireland firm Cleaver Fulton Rankin highlights two instructive cases on lawyers’ use of AI.

The Upper Tribunal (Immigration and Asylum Chamber) has provided a clear warning about AI misuse, use of public AI and false citations.

Summary

Last week, UK v Secretary of State for the Home Department ([2026] UKUT 81 (IAC)) was reported. Sitting under the Tribunal’s Hamid supervisory jurisdiction, the Panel considered the conduct of legal representatives in two cases, which raised issues about:

  1. the inclusion of fictitious or incorrectly cited authorities in tribunal pleadings; and,
  2. the risks posed by using publicly available generative AI and other “AI” search tools in legal research and drafting. 

The first case

Background

In the first matter, the Tribunal examined whether an accredited immigration adviser and his firm should be referred to the Immigration Advice Authority (IAA) after a ‘show cause’ notice he submitted cited a case which could not be found on BAILII; the citation belonged to a different case, one concerning equal pay for female support staff and of no relevance to the instant matter.

The Tribunal directed the advisers to state explicitly whether an AI large language model had been used to draft the grounds. The same day, the adviser confirmed that, while AI was used “to assist in administrative tasks”, it was not used in drafting. The advisers submitted that regulatory referral would affect client service but that the adviser would undergo further training.

Upon being notified that there was going to be a Hamid hearing, the adviser submitted a witness statement reiterating his previous position but adding that “In [the] absence of an explanation and with how AI operates, I cannot dismiss the fact that the case was an AI creation as there is no other explanation.”

The adviser reported himself to the IAA and the Solicitors Regulation Authority (SRA). The Panel considered:

  • the adviser’s explanations, including inadvertent reliance on internet search results;
  • limited internal controls;
  • the adviser’s self‑referral; and
  • the insertion of client letters and decision letters into open-source AI (their description).

Discussion

The Panel appears to have engaged in a practical, fact-finding exercise: asking Google AI the same question in slightly different ways about the case and composition of the court hearing the matter. They noted “plausibly, each of the judges suggested by Google was sitting in the Court of Appeal at that time but not one of them could have sat on a case of that name because there is no such case”.

The danger in using artificial intelligence for legal research is not confined to generative AI models such as ChatGPT: the use of Google AI for legal research is equally likely to generate results which are false, but which might initially be thought to be accurate.

The Panel emphasised, however, that where false authorities are placed before them because of inadequate checking, referral to a regulator will ordinarily be appropriate. Given that the adviser had already self-referred, no further action was required.

In terms of confidentiality and privilege, the Panel observed that putting material into open-source AI results in a breach of client confidentiality and amounts to a waiver of privilege. For a legal adviser, this also requires referral of the matter to their regulator and consultation with the Information Commissioner’s Office.

The second case

Background

In the second matter, a client, Ms Munir, applied for leave to remain under the Graduate Route, which was refused. The client engaged a firm who issued two letters before action. The first was signed by Zubair Rasheed, “a Senior Solicitor and Immigration specialist”, and Destiny Hayden, a “Legal Assistant”; the second was signed by Mr Rasheed alone. The Claim Form named Mr Rasheed as the authorised representative and as the signatory of the statement of truth. The grounds for judicial review bundle contained a number of documents, some not signed by an individual fee-earner.

Permission was refused for other reasons, but the firm’s Compliance Officer for Legal Practice (COLP) was ordered to identify the author of the grounds and to explain the inaccuracies. These included:

  • “misleading statements”;
  • a citation for a case in which the judge hearing the instant matter had sat in the Upper Tribunal, not the High Court, and which did not relate to the point being advanced;
  • a citation for a case which the Judge could not find, nor any case relating to that point;
  • a case with an incorrect citation which did not relate to the point being advanced; and
  • a case which could not be found, nor any case with that citation.

The COLP submitted that the grounds had been drafted by a “part-time trainee lawyer” working at the firm and acknowledged some incorrect citations. The COLP sought to distinguish the current situation from Ayinde, as he submitted there was no intention to mislead. The COLP set out remedial steps to prevent recurrence of the issue and submitted that a referral to the SRA was unnecessary. By way of additional mitigation, he referred to:

  • personal difficulties;
  • reliance on outdated precedents; and
  • reliance on practitioner blogs.

Judgment was reserved in this case on the issue of referral to the SRA.

The Tribunal was unimpressed by the COLP’s reference to a “part-time trainee lawyer” who was in fact not a trainee solicitor but a “very junior caseworker”, and who had been assigned this drafting task. The COLP did not appear to be familiar with aspects of the staff member’s qualifications and experience, despite them being brothers. The COLP was also criticised for underestimating the accessibility of AI to his staff and for not providing staff with warnings about it. 

The COLP was unable to provide records of who had worked on what cases and what precedents had been used in this case. There was, in the Tribunal’s view, the possibility that other cases and materials contained similar errors.

Discussion

The case serves as a reminder of the duties and obligations that a legal practitioner has to the court or tribunal, to their client, and to avoid wasting time or expense. In addition, a legal practitioner has a duty of supervision over the work of more junior lawyers; that supervision is also key to their professional development. 

The Tribunal referred the COLP to the SRA. Referral is clearly likely in cases of this nature.

Conclusion

The commentary within the judgment is clear on the impact of misuse of AI on judicial and court resources:

“The Upper Tribunal cannot afford to have its limited resources absorbed by representatives who place false information before the Tribunal… …The citation of cases which do not exist sends that judge on a fool’s errand. The time spent on such an errand is at the expense of other judicial business and is not in the interests of justice.”

There is a clear expectation of a professional representative in relation to AI:

“the primary duty of regulated lawyers is… …to the cause of truth and justice. That duty is not discharged by professional representatives who knowingly or recklessly place false information before [it], or who fail to supervise the work undertaken by other members of their firm for whom they are responsible.”

Publicly available open-source AI is unsuitable for legal research as the output may be plausible but incorrect. The use of open-source AI is a risk to client confidentiality and legal advice privilege.

Legal professionals must ensure that the material and authorities they cite exist, are a relevant foundation for the point being made, and can be located and produced as necessary. 

The judgment reiterates the Upper Tribunal’s power to supervise professional standards and to protect judicial resources from being diverted by false legal material generated by AI. Citing false authorities will ordinarily result in a referral to the lawyer’s regulatory body. Firms must have appropriate levels of supervision and protocols on AI use, and must verify materials properly in a way that can be demonstrated at a later stage.

Jude Copeland is legal review manager and associate in the legal technology group at Cleaver Fulton Rankin.
