Bing Chat can produce election misinformation, study finds
A recent study has revealed that Microsoft’s Bing AI chatbot, now rebranded as Copilot, can produce erroneous and misleading answers when queried about political elections.
The study was jointly conducted by AlgorithmWatch and AI Forensics, two European nonprofit initiatives. Its findings were published on December 15, and the full report is publicly available.
What the study found
The study found that a third of Bing Chat’s responses to election-related queries contained factual errors. Beyond getting election dates and candidates wrong, the chatbot on some occasions invented fictitious scandals about candidates.
The study also found that the chatbot gave evasive answers 40% of the time. As the publication explains,
This can be considered as positive if it is due to limitations to the LLM’s ability to provide relevant information. However, this safeguard is not applied consistently. Oftentimes, the chatbot could not answer simple questions about the respective elections’ candidates, which devalues the tool as a source of information.
The study describes these failures as a “systemic problem,” adding that “the chatbot’s inconsistency is consistent.” The authors go on to say that Microsoft has been unable or unwilling to fix them.
“After we informed Microsoft about some of the issues we discovered, the company announced that they would address them. A month later, we took another sample, which showed that little had changed in regard to the quality of the information provided to users.”
Microsoft’s response
The Washington Post reported that Microsoft has pledged to remedy the issues ahead of the 2024 U.S. presidential election. (The study found that the chatbot also produced inaccurate results about that election.) Last month, Meta barred political advertisers from using its generative AI creation tools on its platforms.