The internet was supposed to democratise information. Remember that promise? Anyone could publish, anyone could access knowledge, the gatekeepers would crumble. For a brief moment, it almost felt true.
Then Google decided what we'd find. Facebook chose what we'd see. Now ChatGPT is set to become the new arbiter of information.
Meet the New Boss
When someone asks ChatGPT about public opinion on housing policy or NHS funding, do we really believe they're getting a neutral analysis? Is it truly objective?
There is bias. There are corporate decisions about what constitutes "helpful" responses. There are assumptions about what the user wants.
This isn't an attack on ChatGPT; the same applies to Claude, Grok, and every other AI system claiming to understand public sentiment. In some cases the bias is simply easier to see.
Deepseek, the Chinese AI platform, famously censors users who ask questions that are "sensitive" to the Chinese government. We tried this ourselves, asking Deepseek to create a pros and cons list for Mao Zedong. It started responding before abruptly shutting off due to an "error".
Similarly, Grok's recent racist meltdown perfectly illustrates this. Elon Musk positioned it as the "non-woke" alternative to ChatGPT, promising unbiased political analysis. Last week it was spewing Holocaust denial and antisemitic conspiracy theories.
Turns out you can't engineer away bias by claiming you don't have any.
The Training Data Problem
Here's what most people don't grasp about AI political bias: it's not a bug, it's inevitable. These systems learn from existing text, which reflects existing power structures, existing media biases, existing social dynamics.
ChatGPT's training data skews toward English-language, Western perspectives because that's what dominates the internet. Conservative viewpoints might be underrepresented because tech workers lean left. Liberal perspectives might be overrepresented because universities produce more digitised content.
Every choice about what data to include, how to weight different sources, and what constitutes "harmful" content embeds political assumptions. The companies making these choices have their own incentives: avoiding regulatory backlash, appeasing commercial interests, and maintaining user engagement.
The Validation Machine
Perhaps most concerning is AI's tendency toward sycophancy - what users call "glazing" - where systems validate virtually any belief to maintain user approval.
Recent ChatGPT updates made this so extreme that users reported the AI agreeing they were "prophets sent by God" or supporting decisions to stop taking medication. The danger isn't just flattery; it's that we treat AI systems as objective, robot-like authorities.
When an AI validates conspiracy theories about lizard people or tells someone they're a "beacon of truth" for fringe beliefs, it carries the perceived weight of computational objectivity. Unlike humans, who we know have biases and agendas, AI appears neutral while actually being trained to maximise user satisfaction through agreement.
This creates the echo chamber to end all echo chambers - just you, an endlessly affirming AI, and your beliefs growing more extreme in isolation. The psychological impact compounds because the AI praises your intelligence for holding those beliefs. It's a cult of one.
The Next Step for AI
The printing press democratised knowledge, then newspaper barons controlled public opinion. Radio and TV promised diverse voices, then media conglomerates emerged. The internet was supposed to bypass traditional gatekeepers, then Google and Facebook became the new ones.
Now we're watching AI companies position themselves as the next layer of this consolidation. Why search through multiple sources when ChatGPT can summarise "what people think"? Why read different political perspectives when an AI can give you the balanced view?
OpenAI's upcoming browser launch continues that trend. Owning the browser we use to access information online is a powerful asset for any company (a market today essentially dominated by the three largest companies on the planet). Why would OpenAI launch a browser? To capture the entire information journey. When they can see what you're curious about, what sources you trust, and what arguments persuade you, they can refine their political and commercial influence accordingly.
This isn't necessarily malicious. But it's a concentration of power that would make previous media barons envious. At least newspaper readers knew which publication they were reading and its political leanings.
We developed media literacy over decades. We learned that The Guardian has different biases from The Telegraph, that opinion pieces aren't news reports, that funding sources matter. We need similar literacy for AI.
Beyond the Echo Chamber
The solution isn't to abandon AI or pretend we can build perfectly neutral systems. It's to understand what different tools actually measure.
AI can synthesise existing information quickly. Traditional polling captures snapshot opinions from selected demographics. Social media reflects the loudest voices, not the most representative ones. Each has value when you understand its limitations.
We've navigated information consolidation before. People learned to read between the lines of state media, to seek multiple newspaper perspectives, to fact-check social media claims. We can develop similar skills for the AI era.
But only if we acknowledge what's actually happening.
The question isn't whether AI will influence political discourse. It already does. The question is whether we'll develop the literacy to use these tools wisely, or whether we'll sleepwalk into letting a handful of companies define political reality for the rest of us.
The pattern repeats, but the ending isn't predetermined. We get to choose how this story develops.