Chatbots developed by Google and Microsoft are fabricating Super Bowl statistics

More evidence of GenAI’s tendency to fabricate information comes from Google’s Gemini chatbot, formerly known as Bard, which believes that the 2024 Super Bowl has already taken place. It even provides (fictional) statistics to support its claim.

According to a Reddit thread, Gemini, powered by Google’s GenAI models bearing the same name, responds to inquiries about Super Bowl LVIII as if the game concluded yesterday or even weeks prior. Similar to many bookmakers, it appears to favor the Chiefs over the 49ers (apologies to San Francisco fans).

Gemini gets creative with the details, too. In one instance it offers a full breakdown of player statistics, claiming that Kansas City Chiefs quarterback Patrick Mahomes ran for 286 yards with two touchdowns and an interception, while San Francisco's Brock Purdy managed 253 rushing yards and one touchdown.

Microsoft's Copilot chatbot makes the same mistake, insisting the game is over and backing the claim with its own invented statistics. Copilot runs on a GenAI model similar to, if not identical to, the one underpinning OpenAI's ChatGPT (GPT-4). Yet in my testing, ChatGPT was reluctant to make the same error.

The whole thing seems a bit frivolous, and it may already have been fixed: I was unable to replicate the Gemini responses described in the Reddit thread. (I would be surprised if Microsoft weren't working on a fix as well.) But it also illustrates the major limitations of today's GenAI systems, and the danger of placing too much trust in them.

GenAI models have no real intelligence. Trained on an enormous number of examples, usually sourced from the public web, they learn how likely data (e.g., text) is to occur based on patterns, including the context of the surrounding data.
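To see what that means in practice, here is a toy next-token model in Python. It is a deliberately crude sketch, nothing like a production LLM, and the three-sentence corpus is invented for illustration; the point is that the model only ever learns counts of what follows what.

```python
# Toy illustration of next-token prediction: the "model" is nothing but
# co-occurrence counts. It has no concept of whether its output is true.
from collections import Counter, defaultdict

# Invented three-sentence corpus, purely for illustration.
corpus = (
    "the chiefs won the super bowl . "
    "the niners lost the super bowl . "
    "the chiefs won the game ."
).split()

# For each word, count which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token_probs(context_word):
    """P(next word | previous word), taken straight from raw counts."""
    counts = following[context_word]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(next_token_probs("chiefs"))  # {'won': 1.0} -- a learned pattern, not a fact
print(next_token_probs("the"))     # a mix of 'chiefs', 'super', 'niners', 'game'
```

Every number here comes from pattern frequency alone; if the training text says the Chiefs won, the model "believes" it, true or not.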

This probability-based approach works remarkably well at scale. But while the range of likely words and their probabilities tends to produce coherent text, that outcome is far from guaranteed. Large language models (LLMs) can generate text that is grammatically correct but nonsensical, such as the assertion about the Golden Gate. Or they can spout mistruths, propagating inaccuracies in their training data.
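Here is the generation side of the same sketch: sampling the next token from a probability distribution. The prompt and the probabilities below are made up for illustration, but they show why fluent output is no guarantee of accurate output; the sampler ranks likelihood, not truth.

```python
# Minimal sketch of probability-driven generation with temperature sampling.
# The prompt and distribution below are invented for illustration.
import random

# Hypothetical next-token probabilities after the made-up prompt
# "Super Bowl LVIII was won by the":
candidates = {"Chiefs": 0.55, "49ers": 0.30, "Ravens": 0.10, "Lions": 0.05}

def sample_next(probs, temperature=1.0):
    """Sample one token; higher temperature flattens the distribution."""
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

# The model completes the sentence confidently whether or not the game
# has actually been played.
for _ in range(3):
    print("Super Bowl LVIII was won by the", sample_next(candidates, 1.2))
```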

There is no malice on the part of LLMs. The concepts of truth and falsehood are meaningless to them; they have simply learned to associate certain words or phrases with certain concepts, even when those associations are inaccurate.

Hence the falsehoods Gemini and Copilot spread about the 2024 Super Bowl (and the 2023 one, for that matter).

Google and Microsoft, like most GenAI vendors, readily acknowledge that their GenAI applications are imperfect and prone to mistakes. But those acknowledgments tend to come in fine print that is easy to miss.

Super Bowl misinformation is hardly the most harmful example of GenAI going off the rails; the gravest cases probably involve endorsing torture, reinforcing ethnic and racial stereotypes, or writing convincingly about conspiracy theories. Still, it is a useful reminder to verify statements from GenAI bots. There is a decent chance they are not true.

Pooja Prajapati
