Biased and Bogus Sources Are Powering AI
I was promised Rosie from The Jetsons, but all I got was this half-baked AI overview
This morning I asked Google, “Can I trust Google Gemini to provide reliable information?”
Atop the list of search results sat an AI overview that Google’s own large language model — or LLM — had generated. “While Google Gemini is a powerful tool for brainstorming, drafting, and summarizing,” it read, “it should not be blindly trusted for accurate, factual information. It is prone to ‘hallucinations’ (confident, incorrect answers), can produce outdated information, and may display bias.”
Outdated information. Bias. Hallucinations! Oh my!
As major tech companies, from Google to Meta, shove generative AI tools into every nook and cranny of their existing products, these models are increasingly shaping how people encounter information — and misinformation. This is particularly important as we enter yet another election season where malignant actors from inside and outside the United States will seek to weaponize media and technology to spread false and misleading reports, stoke hate and division — and undermine democracy.
Google, the world’s largest search engine, now pushes AI overviews in approximately 60 percent of its results. More and more, people turn to AI tools like ChatGPT, Claude, Gemini, Grok and Llama when seeking news and information. These tools are built on LLMs that dictate what people see, and the order in which they see it.
According to a new report commissioned by PSG Consulting, “AI Large Language Model Training: The Potential Risks of Ideological Skewing,” these LLMs are disproportionately trained on biased, low-fact and right-leaning sources.

Garbage in, garbage out
Here’s where I confess that I skim AI summaries when making low-stakes decisions about which fiction series to read next. But I’ve noticed a worrisome trend when I’ve researched more serious topics, like trying to understand my friend’s cancer diagnosis. Not only did Gemini feed me a series of bullet points with conflicting assertions of fact, but the sources for its top answers weren’t leading cancer research institutes; they were junk websites I’d never heard of.
The PSG Consulting report helps explain why. This analysis studied more than two dozen web crawlers, the software that AI companies use to scan the open internet and pull in data to train their LLMs. Individual websites can block or partially block these crawlers — or allow them to crawl freely. The researchers looked at crawler-blocking patterns across more than 150 websites, ranging from conservative to nonpartisan to left-leaning. The findings are sobering:
- “Very-low-factual” sites are roughly 90 percent accessible to AI crawlers, while “high-factual” outlets are only 50 percent accessible.
- High-factual, center-left publishers impose the most restrictions on AI crawlers, while low-factual, far-right publishers impose few.
- Far-right sites are nearly 80 percent accessible to AI crawlers, while center-left outlets are less than 40 percent accessible.
- Across the seven highest-impact AI crawlers, the median center-left outlet blocks all seven, while the median conservative or far-right outlet blocks none.
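Sites typically signal this blocking through a robots.txt file, which tells each crawler, by user-agent name, what it may fetch. As a minimal sketch — the user agents GPTBot (OpenAI) and Google-Extended (Google’s AI-training crawler) are real identifiers, but the robots.txt contents and URL here are hypothetical — Python’s standard library can show how the same page ends up off-limits to AI crawlers while staying open to an ordinary search bot:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: shut out two AI-training crawlers, allow everyone else.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

for agent in ("GPTBot", "Google-Extended", "Googlebot"):
    allowed = parser.can_fetch(agent, "https://example.com/article")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

A crawler that respects the Robots Exclusion Protocol will skip the blocked pages entirely — which is exactly why, in aggregate, the sites that opt out disappear from the training data while the sites that don’t opt out are overrepresented.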
The study identified more than a dozen types of biases in LLM training data, including over-representation of sites that permit crawler access, English-language dominance, Western-canon bias, underrepresentation of scholarship from the global majority, overrepresentation of well-resourced publishers, temporal bias, and more.
We must reckon with these biases. Since we know there is an English-language bias, we should consult with non-English-speaking engineers and sociologists to solve that problem. Because we know the AI industry is extremely homogeneous — overwhelmingly Western, male and white — we must take extra care to listen to women of color who are experts in the field.
People are predisposed to give chatbots too much credit; that’s been true since ELIZA, the very first chatbot. This reaction so disturbed Joseph Weizenbaum, the MIT professor who created that basic chatbot, that he wrote a whole book about it. Yet a 2025 Stanford University study found that AI rivals humans in political persuasion. As we approach election season, we must prepare for how malignant actors will exploit AI for political manipulation, and be aware that the problem is likely far worse in other languages because of the overrepresentation of English in data sets and among the human annotators who train and test AI learning models.
Not what The Jetsons promised
In The Jetsons, Rosie the robot cleaned the house and sprinkled in some motherly advice with a side of sass. I prefer that fiction to our current timeline, as AI robots are making a mess for civil society to clean up.
With trillions of dollars in AI investment expected this year, the burden of ensuring that AI does no harm should fall on the wealthy executives who stand to profit from its proliferation, not on civil society. Instead, parents, teachers, librarians, nonprofit organizations, election officials, journalists and more are left to scramble together civics lessons for the AI era and to mount corporate and public-policy campaigns against well-heeled AI giants.
Tech barons are pouring hundreds of millions of dollars into influencing this year’s elections and the candidates seeking public office. It’s no wonder that conservative and liberal elected officials from California to D.C. are either fiercely pro-AI or mealy-mouthed about its negative impacts.
This show is a rerun
We’ve seen this show before. Social-media magnates — many of whom are now embedded in the AI industry — have always prioritized profit above all else when getting people hooked on social media. They design their algorithms to amplify bigotry and to microtarget false and misleading information. These problems are even more pronounced in Spanish and other non-English languages.
That’s why I organized corporate accountability campaigns like Change the Terms, Stop Hate for Profit, Stop Toxic Twitter and Ya Basta Facebook. Civil society spent years trying to alleviate the harms, to pressure social-media companies to stop hurting us, to pass laws to protect civil and human rights in the digital age. We raised public awareness to blunt the effects of disinformation, but structural solutions to remedy the harm weren’t implemented.
We’re at a critical juncture with some of the same players who spent endless piles of cash to drown us out while they maximized their social-media profits, no matter the consequences. They will pursue the almighty dollar no matter the human cost, and their claims to the contrary are designed to placate and silence us. Venture capitalist and early Facebook investor Roger McNamee has confirmed as much, calling Silicon Valley business models “sociopathic.”
Too rich to fail?
In a just democracy, we’d have laws and regulations to balance the negative impacts of AI with the potential for innovation in the public interest. We’d have full transparency into how LLMs operate, to help inform regular people and policymakers about the efficacy, risks and benefits of these models. We’d commit to learning how to harness new technologies to make life better for people and to protect the environment, human and civil rights, and access to accurate information. We’d be willing to say no to the proliferation of risky and harmful technologies.
Instead, our president is for sale to the highest bidder. Trump himself manipulates our information environments, gleefully spreading lies and hate. He allowed Elon Musk nearly unfettered access to our personal data to feed his Grok LLM, and he rolled out the red carpet for any AI companies willing to toe his line. Meanwhile, those offering even the tiniest shred of resistance need not apply.

Congress has been no better. Over the objections of Congressional AI Caucus Democrats, it has tried to use numerous must-pass bills to preempt states from regulating AI, though those attempts have failed so far thanks to a broad public-interest coalition banding together.
A whole new ‘infostructure’
The PSG Consulting report offers suggestions and poses important questions about how to increase transparency and ensure that people have access to credible, factual information.
I’d go a step further and suggest that we must enact structural policy solutions to build greater information infrastructure. Infostructure, if you will.
We need to build a media-and-tech ecosystem that can support a just and multiracial democracy and withstand authoritarian power grabs.
The first step includes directing more resources to a robust and diverse set of trusted sources like community organizations, public and ethnic media, and community newsrooms that help people understand what’s happening. With the commercial model for journalism failing us in so many ways, we need to build a whole new system. We must invest in publishers that provide accurate news and information, not internet-brainrot clickbait.
We cannot expect to receive trustworthy information and investigative journalism for free. Accurate information is a public good, like schools, libraries, parks and beaches. We must treat it as such.
And in light of the rapid monetization and weaponization of our private data, we must pass laws to protect privacy and civil rights in the digital age. We shouldn’t have to sell our privacy in exchange for access to information or unbiased LLMs. We should break up with the social-media model where ads, ideas and information are targeted at us based on our online behavior and demographic data.

Finally, media literacy and civics need a major investment. We need to teach students early and often about the evolving communications ecosystem. We should invest in teaching students how to engage in democracy, and how to distinguish between low-fact and high-fact data. And we must model the golden rule of media literacy: Consider the source.
We cannot keep letting tech bros frame the debate and call the policy shots. We need deep examination of the environmental, safety, and human and civil rights risks of AI. And Big AI should pay for it. Not us.
About the author
Jessica J. González is an attorney and co-CEO of Free Press and Free Press Action, where she leads efforts to transform the media system so it can support a just and multiracial democracy. Follow her on Bluesky.
Teamwork
Compiled by Pressing Issues editors
It’s Jessica J. González takeover week at Pressing Issues! But if you missed Tuesday’s dispatch on war and media consolidation, Jessica made an Instagram reel breaking it all down.

Jessica also joined Free Press’ Ruth Livier as guests on WHMP radio’s Talk the Talk in Northampton, Massachusetts, where they discussed media companies abandoning DEI and capitulating in the second Trump administration. In Livier’s COMPLICIT report, she found that under Trump all but one of the country’s top 35 media companies have altered or erased their DEI policies.
The kicker
“This deal endangers our democracy by giving a family of pliant billionaires even more control of vast swaths of our news coverage, TV stations, and movie studios.” —Craig Aaron, discussing the Paramount Skydance-Warner Bros. Discovery merger in The Ringer’s story “The Terrifying Tentacles of Paramount’s Media Empire.”

