The United Kingdom issued guidance on the use of artificial intelligence that takes society’s reliance on AI to a whole new dystopian level.
The British Courts and Tribunals Judiciary announced on Dec. 12 that judges and court officials are now permitted to use AI in producing legal rulings, despite AI companies’ own admissions that their chatbots are flawed and should not be relied upon by users. Strikingly, the Judiciary went on to say that it hopes the legal field will embrace future advancements in AI technology.
“The use of Artificial Intelligence (‘AI’) throughout society continues to increase, and so does its relevance to the court and tribunal system,” the Judiciary said in an official statement. “The guidance is the first step in a proposed suite of future work to support the judiciary in their interactions with AI.”
While the Judiciary acknowledged that AI responses “may be inaccurate, incomplete, misleading, or biased,” the guidance manual posted on the judiciary’s website appeared to encourage legal professionals to use the tools as long as they independently verify the information put before the court.
“All legal representatives are responsible for the material they put before the court/tribunal and have a professional obligation to ensure it is accurate and appropriate,” read the AI Guidance document. “… Until the legal profession becomes familiar with these new technologies, however, it may be necessary at times to remind individual lawyers of their obligations and confirm that they have independently verified the accuracy of any research or case citations that have been generated with the assistance of an AI chatbot.”
Concerningly, many AI tech firms have repeatedly warned that their own products are not reliable or trustworthy.
Google’s disclaimer for Bard warns users about the technology’s trustworthiness. "Generative AI and all of its possibilities are exciting, but it’s still new,” Google cautions. “Bard is an experiment, and it will make mistakes. Even though it’s getting better every day, Bard can provide inaccurate information, or it can even make offensive statements."
ChatGPT carries a similar disclaimer warning that the chatbot’s answers could be just plain wrong. "ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers,” declared OpenAI, the maker of ChatGPT. “It has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content."
MRC Free Speech America Assistant Editor Luis Corneli contributed to this report.
Conservatives are under attack. Contact your representatives and demand that government agencies and Big Tech be held to account to uphold the First Amendment. If you have been censored, contact us at the Media Research Center contact form, and help us hold Big Tech accountable.