A recent study by the European nonprofits AI Forensics and AlgorithmWatch has raised concerns that Microsoft's Bing AI chatbot, since rebranded as Copilot, provides inaccurate and misleading answers to election-related queries. The research, released on December 15, found that the chatbot gave incorrect answers 30% of the time when asked about political elections in Germany and Switzerland, with questions about candidate information, polls, scandals, and voting all eliciting inaccurate responses. The study also reported discrepancies in the chatbot's answers to queries about the 2024 U.S. presidential elections.

While the misinformation does not appear to have directly affected any election outcomes, it raises concerns about public confusion in future votes. The study notes that these issues are not exclusive to Bing: preliminary tests of OpenAI's GPT-4 turned up similar discrepancies.
The nonprofits chose Bing's AI chatbot for the study because it was one of the first to cite sources in its responses. The findings, however, point to a need for greater accuracy in the information provided by generative AI, which is becoming increasingly widespread. The study also noted that the safeguards built into the chatbot were applied "unevenly," leading it to give evasive answers 40% of the time.
In response to the study, Microsoft acknowledged the issues and said it is committed to addressing them ahead of the 2024 U.S. presidential elections. A company spokesperson also advised users to independently verify information obtained from AI chatbots.
The incident underscores the growing challenges posed by AI-generated content and the need for continued improvement in how these systems deliver reliable information. As generative AI becomes more prevalent, the study stresses its potential impact on democracy and the importance of public access to accurate, transparent information.
(SAVANNAH FORTIS, COINTELEGRAPH, 2023)