In a pivotal ruling in June, Dame Victoria Sharp, President of the King's Bench Division of the High Court of England and Wales, cautioned lawyers against citing fictitious AI-generated cases in court submissions, warning that doing so could attract criminal charges. Reviews of cases involving AI misuse found that fake precedents had been put before the court without adequate verification: in the Ayinde case, counsel cited five non-existent cases, while in Al-Haroun, 18 of the cited authorities proved to be non-existent or misquoted. Sharp emphasized that unchecked reliance on generative AI tools such as ChatGPT threatens public confidence in the justice system. The Bar Council and the Solicitors Regulation Authority (SRA) have issued guidance stressing lawyers' duty of accuracy. As AI's role in legal practice expands, experts urge stronger oversight and greater caution in its use, advocating that lawyers transparently disclose AI usage in pleadings. Such transparency could support efficient case handling while mitigating the risks AI poses in legal contexts.