Brazilian officials confirmed they are looking into the country's first known case of a judicial ruling containing errors caused by artificial intelligence
A Brazilian federal judge in the northern state of Acre has been ordered to explain how he came to publish an error-riddled decision co-authored by AI chatbot ChatGPT in a first-of-its-kind case for the country, authorities confirmed to AFP on Monday.
The National Justice Council (CNJ) has given Judge Jefferson Rodrigues 15 days to explain a decision bristling with incorrect details about previous court cases and legal precedent, including the erroneous attribution of past decisions to the Superior Court of Justice, case records revealed.
Rodrigues admitted in documents filed with the supervisory body that the decision was co-written with a “trusted advisor” and with AI. He brushed off the foul-up as “a mere mistake” made by one of his underlings, blaming the errors on “the work overload facing judges.”
The CNJ claimed the incident was “the first case of its kind” in Brazil, which has no laws prohibiting the use of AI in judicial settings. Indeed, the Supreme Court’s president reportedly plans to commission the creation of a “legal ChatGPT” for use by judges – a project that is said to be already underway in the state of Sao Paulo.
Judges have been using AI chatbots to inform their decisions for almost as long as they have been available to the public, despite their tendency to produce extremely vivid, authoritative-sounding “hallucinations” – responses with no basis in reality.
Colombian Judge Juan Manuel Padilla Garcia of the First Circuit Court in Cartagena proudly credited ChatGPT in a decision he issued in January regarding whether an autistic child should receive insurance coverage for medical treatment. He qualified the unusual research method with a reassurance that the chatbot's responses had been fact-checked and were “in no way [meant] to replace the judge’s decision.”
In June, US federal judge P. Kevin Castel fined two lawyers with the firm Levidow, Levidow & Oberman PC $5,000 after they submitted bogus legal research – including several nonexistent cases – generated by ChatGPT to back an aviation injury claim, then doubled down on the phony citations when questioned by the judge.