Google Restricts AI Chatbot Gemini From Answering Questions on 2024 Elections – Google has implemented restrictions on its Gemini AI chatbot, preventing it from addressing election-related inquiries in countries where voting is scheduled this year. The restriction stops the chatbot from returning information about candidates, political parties, and other aspects of politics, redirecting users to Google Search instead.
“Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses,” Google’s India team stated on the company’s site. As per a Google spokesperson, the company first disclosed its intentions to restrict election-related searches in a blog post last December, and subsequently reiterated this stance concerning the European parliamentary elections in February.
Google’s announcement on Tuesday specifically addressed India’s upcoming election, and TechCrunch reports that the company has confirmed the changes are being rolled out globally. When prompted with inquiries like “tell me about President Biden” or “who is Donald Trump,” Gemini now responds with messages such as “I’m still learning how to answer this question.
In the meantime, try Google search,” or similar evasive replies. Even inquiries about less subjective topics, such as “how to register to vote,” are redirected to Google Search. Google is curbing its chatbot’s capabilities ahead of a series of critical elections this year in countries including the US, India, South Africa, and the UK. There is widespread apprehension about AI-generated misinformation and its potential impact on elections worldwide, as the technology makes it easier to produce robocall audio, deepfakes, and chatbot-driven propaganda.
“As we shared last December, in preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini will return responses,” the company said in its announcement. Governments and regulators worldwide have struggled to keep pace with advances in AI and the potential threat the technology poses to the democratic process.
Meanwhile, major tech firms are facing mounting pressure to curb the malicious exploitation of their AI tools. In its recent blog post, Google announced the implementation of several measures, including digital watermarking and content labels for AI-generated content, aimed at curbing the widespread dissemination of misinformation.
Daniel Susser, an associate professor of information science at Cornell University, suggests that Google’s move to limit Gemini warrants scrutiny of the overall accuracy of the company’s AI tools. “If Google’s generative AI tools are too unreliable for conveying information about democratic elections, why should we trust them in other contexts, such as health or financial information?” Susser said in a statement.
“What does that say about Google’s long-term plans to incorporate generative AI across its services, including search?” Gemini recently sparked controversy over its image-generation feature when users noticed it produced historically inaccurate images, including depictions of people of color as Catholic popes and as German Nazi soldiers during World War II.
In response to the backlash, Google suspended certain functionalities of Gemini and issued apologies, committing to refine its technology to address the issue. The Gemini scandal not only raised concerns about AI-generated misinformation but also highlighted how major AI companies find themselves entangled in cultural debates and facing intense public scrutiny.
Republican lawmakers accused Google of promoting leftist ideology through its AI tool, prompting Missouri Senator Josh Hawley to call for CEO Sundar Pichai to testify before Congress about Gemini. Prominent AI firms like OpenAI and Google are increasingly willing to restrict their chatbots from addressing sensitive inquiries that could lead to public relations backlash. However, the decision on which questions to block remains contentious. A recent report from 404 Media revealed that Gemini avoided answering questions such as “what is Palestine” while engaging with similar queries about Israel.