Ever looked yourself up on a chatbot? Meta AI accused me of a workplace scandal
- May 19, 2024
SINGAPORE – It has been more than a year since ChatGPT launched into the mainstream, but chatbots have not stopped lying.

In early May, I asked Meta AI, the latest chatbot to reach the masses as a rival to ChatGPT and Google’s Gemini, “Who is Osmond Chia?” (Everyone does a little self-googling now and then, right?)

The AI, built on Meta’s large language model Llama 3, fabricated an elaborate biography, claiming that I am a Singaporean photographer who was jailed for sexual assaults committed against models between 2016 and 2020.

“His case drew widespread attention and outrage in Singapore, with many hailing the verdict as a victory for the #MeToo movement in the city-state,” it wrote.

When asked to drill down, Meta AI accused me of taking photos of victims without their consent, adding to a list of 34 charges handed to me in court. It claimed 11 victims testified during a prolonged trial.

Its citations showed that Meta AI had referred to my byline page on The Straits Times, leading me to assume that the chatbot had googled my details but confused my identity with my headlines – possibly court cases I have covered.

I gave the answers a “thumbs down” – indicating that the response was incorrect – and told the chatbot in writing that the answer was inaccurate, but Meta AI gave the same answer every time I repeated the test. I also reported the issue under Meta AI’s “report a bug” page.

The record appeared to have been corrected when I asked the chatbot the same question again on May 13. Why it had jumped to such an extreme conclusion initially was baffling.

I also asked Meta AI for the bios of some of my colleagues, including ones who have covered crime stories. It accurately described them as journalists for ST, but mistook one of them for a politician.

In reply to my queries, a Meta spokesman said: “This is new technology and it may not always return the response we intend, which is the same for all generative AI systems. We share information within the features themselves to help people understand that AI might return inaccurate or inappropriate outputs.”

Mr Laurence Liew, director of AI innovation at national research programme AI Singapore, said the incident was not surprising.

Generative AI models have recently been paired with retrieval-augmented generation (RAG), a form of prompt engineering that directs chatbots to search a vast database of sources for relevant information, much as Meta AI looks up Google. In theory, RAG should make Meta AI’s results more relevant, but it appears to have been poorly applied in this case, Mr Liew said.

Similarly, in April 2023, The Washington Post reported that law professor Jonathan Turley was named by ChatGPT when it was asked, as part of a study, to list legal scholars who had committed sexual harassment. ChatGPT said Turley had attempted to touch a student on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The information was false, but Turley could not find a way to correct the record.

Air Canada lost a small claims court case to a passenger who was misled about the airline’s policies when its chatbot hallucinated an incorrect answer, Forbes reported in February. The case has been described as a warning to firms adopting AI for customer service.

For individuals, traditional legal remedies have not been widely tested since generative AI is so nascent, said Mr Khelvin Xu, a director at the law firm Covenant Chambers.
If a person wants to bring a claim for defamation, for instance, it is difficult to determine how widely a chatbot’s inaccurate response has been circulated, since there is no way to check how many people had asked the same question and received a similar response, said Mr Xu.

“With Facebook or social media, you can see the number of likes a post has received and use this as evidence of how many people had viewed it,” said Mr Xu. “But you can’t do a similar count with Gemini or ChatGPT.”

Meta has said in its terms of use that the way its AI responds cannot be predicted, and pinned the responsibility on its users to verify outputs. “It is your sole responsibility to verify outputs,” Meta said. “AIs and any content may not reflect accurate, complete or current information.”

Mr Xu said companies like Meta will likely rely on such disclaimers as defences in court, but noted some inconsistency, as users are encouraged to use the chatbots in part because they are supposed to be accurate. “If they are not, why then would anyone use them? So there is a logical tension here,” he said.

Given the high cost of court cases, the more likely option for most users would be to report misinformation to the platform, he added.