Judges are being urged to be on the lookout for court and tribunal users relying on artificial intelligence (AI) tools. New judicial guidance on AI, published on 12 December 2023, warns judges that public AI chatbots do not always generate accurate output, even with the best prompts, and can even invent fictitious cases, as highlighted by Managing Associate James Evison in our recent viewpoint.
The guidance says that legal representatives are not required to state that they have used AI tools. However, while these technologies are still new, judges may need to remind lawyers of their obligations to ensure that the material they put before the court or tribunal is accurate and appropriate, and can ask them to confirm that they have independently verified the accuracy of any research or case citations generated with the assistance of an AI chatbot.
For unrepresented litigants, the guidance accepts that AI chatbots may be their only source of legal help, and that they may be unable to verify the legal information provided, or may not even be aware that AI chatbots are prone to error. Judges are told to watch for clues that submissions or other documents have been generated by AI, such as unfamiliar citations, American spelling, and text that appears “superficially highly persuasive and well written but which on closer inspection contains obvious substantive errors”. The latter might sound like many opposing counsel, but if the judge suspects an AI chatbot may have been used, the guidance suggests asking the unrepresented litigant whether this is the case and how, if at all, they checked its accuracy.
As for judges themselves, they are told that generative AI is not recommended for legal research, but that it could be a useful secondary tool, for example to summarise large bodies of text, suggest topics for presentations, or handle administrative tasks such as composing emails and memoranda. However, judges are warned to check the accuracy of any information an AI tool provides, to be aware that it may be biased, and to take responsibility for material produced in their name. They are also told not to enter any information into a public AI chatbot that is not already publicly available, to disable the chat history where possible, and to follow best practices for security. Finally, the guidance warns judges to be alert to the potential of AI to produce fake material, including text, images and video.
This useful guidance will help equip the judiciary and court users to grapple with the challenges posed by the increasing use of AI technology in litigation.