An immigration barrister was found by a judge to have been using AI to do his work for a tribunal hearing after citing cases that were "entirely fictitious" or "wholly irrelevant".
Chowdhury Rahman was found using ChatGPT-like software to prepare his legal research, a tribunal heard. Rahman was found not only to have used AI to prepare his work, but to have "failed thereafter to undertake any proper checks on the accuracy".
The upper tribunal judge Mark Blundell said Rahman had even attempted to hide the fact that he had used AI and had "wasted" the tribunal's time. Blundell said he was considering reporting Rahman to the Bar Standards Board. The Guardian has contacted Rahman's firm for comment.
The matter came to light in the case of two Honduran sisters who claimed asylum on the basis that they were being targeted by a criminal gang in their home country. Rahman represented the sisters, aged 29 and 35. The case escalated to the upper tribunal.
Blundell rejected Rahman's arguments, adding that "nothing said by Mr Rahman orally or in writing establishes an error of law on the part of the judge and the appeal must be dismissed".
Then, in a rare ruling, Blundell went on to say in a postscript that there were "significant problems" within the grounds of appeal put before him.
He said that 12 authorities were cited in the documents by Rahman, but when he came to read the grounds, he noticed that "some of those authorities did not exist and that others did not support the propositions of law for which they were cited in the grounds".
In his judgment, he listed 10 of those cases and set out "what was said by Mr Rahman about these actual or fictitious cases".
Blundell said: "Mr Rahman appeared to know nothing about any of the authorities he had cited in the grounds of appeal he had supposedly settled in July this year. He had apparently not intended to take me to any of those decisions in his submissions.
"Some of the decisions did not exist. Not one decision supported the proposition of law set out in the grounds."
Blundell said the submissions made by Rahman – who said he had used "various websites" to conduct his research – were therefore misleading.
Blundell said: "The obvious explanation is … that the grounds of appeal were drafted in whole or in part by generative artificial intelligence such as ChatGPT.
"I am bound to observe that one of the cases cited in Mr Rahman's grounds … has recently been wrongly deployed by ChatGPT in support of similar arguments."
Rahman told the judge that the inaccuracies in the grounds were "due to his drafting style" and he accepted there might have been some "confusion and vagueness" in his submissions.
Blundell said: "The problems which I have detailed above are not problems of drafting style. The authorities which were cited in the grounds either did not exist or did not support the grounds which were advanced."
He added: "It is overwhelmingly likely, in my judgment, that Mr Rahman used generative artificial intelligence to formulate the grounds of appeal in this case, and that he attempted to conceal that fact from me during the hearing.
"Even if Mr Rahman thought, for whatever reason, that those cases did somehow support the arguments he wished to make, he cannot explain the entirely fictitious citations.
"In my judgment, the only realistic possibility is that Mr Rahman relied significantly on GenAI to formulate the grounds and sought to disguise that fact when the difficulties were explored with him at the hearing."
The judge's ruling was made in September and published on Tuesday.

