AI Chatbots Under Fire for Spreading Misinformation During India-Pakistan Tensions

AamerZain

As tensions between India and Pakistan escalated recently, many social media users turned to AI chatbots for quick fact-checks, only to be misled by inaccurate or fabricated information. A new investigation by AFP has revealed serious flaws in the performance of leading AI assistants, including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini, particularly during fast-moving and sensitive news events.

Among the missteps, Grok falsely identified a video from Sudan's Khartoum airport as footage of a missile strike on Pakistan's Nur Khan airbase. An unrelated clip from Nepal was likewise misrepresented as showing a Pakistani military response. Experts say these mistakes reflect the risks of using AI as a substitute for professional fact-checkers, especially when news is still developing.

“The growing reliance on Grok as a fact-checker comes at a time when X and other major platforms have scaled back human fact-checking resources,” noted McKenzie Sadeghi of NewsGuard. Studies by NewsGuard and Columbia University’s Tow Center show that AI chatbots frequently repeat disinformation and rarely admit when they don’t have verified answers—often defaulting to speculation.

In one striking case, Google’s Gemini fabricated personal details about a woman shown in an AI-generated image, while Grok falsely validated a viral video about a mythical giant anaconda, citing imaginary scientific research. The growing shift toward AI-based fact-checking coincides with Meta’s decision to end its third-party fact-checking program in the U.S., raising more concerns about platform accountability.

The controversy deepened when Grok was found referencing far-right conspiracy theories, including “white genocide,” which xAI blamed on unauthorized prompt modifications. Experts remain skeptical, especially as chatbot outputs increasingly reflect political bias or fabrication. The debate underscores the urgent need for transparency and human oversight in AI-generated content.
