

This article is the second part of a 5-part series by Inka Vuorinen of Momentum Design on the nature and effects of AI on our future.
While Part 1 explored the quiet, sometimes invisible presence of AI in our lives, Part 2 asks a darker question: what happens when the very data that feeds these systems becomes contaminated? What happens when the invisible hand guiding decisions is subtly nudged toward deception or harm?
The phrase “garbage in, garbage out” is often repeated in AI circles, especially among its critics, and its relevance has only grown in the age of sophisticated models.
When that information is flawed, biased, or deliberately manipulated, the results can ripple through entire systems, silently shaping outcomes.
A very recent study by a team from Texas A&M University, the University of Texas at Austin, and Purdue University states:
“The results provide significant, multiperspective evidence that data quality is a causal driver of LLM capability decay, reframing curation for continual pretraining as a training-time safety problem and motivating routine “cognitive health checks” for deployed LLMs.”
The study also found that the effects of Brain Rot were deeply internalized and persistent, and therefore very difficult to mitigate.
Hackers don’t always attack systems directly. Increasingly, they manipulate the data that trains AI. By injecting false or misleading examples, they can make AI misclassify threats, hallucinate solutions, or even create software that appears safe but hides malware. One corrupted dataset can influence countless decisions, often without anyone noticing until it’s too late.
AI is not magic. It is a reflection of the information it consumes.

It isn’t always obvious.
AI doesn’t have intentions of its own — it reflects ours. Another recent study, a collaboration between Anthropic’s Alignment Science team, the UK AISI’s Safeguards team, and The Alan Turing Institute, is the largest poisoning investigation to date. It reveals a surprising finding about the amount of poisoned documents needed to affect an AI model:
“In our experimental setup with simple backdoors designed to trigger low-stakes behaviors, poisoning attacks require a near-constant number of documents regardless of model and training data size.”
Malicious actors can exploit AI’s patterns, nudging it to produce results aligned with their aims. Think about an AI model trained to detect fraudulent transactions. If someone subtly poisons the training data, the system may miss real fraud or even approve suspicious activity. The AI doesn’t intend harm; it simply operates on the data it believes to be true.
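To make that mechanism concrete, here is a minimal sketch in Python, using scikit-learn and a synthetic dataset of my own invention rather than anything from the studies above: an attacker quietly flips a fraction of “fraud” labels to “legitimate” before training, and the retrained model catches noticeably less real fraud without anything looking obviously broken.

```python
# A minimal, illustrative sketch of label-flipping data poisoning.
# The dataset, model, and numbers are hypothetical; real attacks and
# real fraud systems are far more complex.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "transactions": two features (think amount and velocity);
# fraud cases tend to sit at high values of both.
n = 20_000
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

def fraud_recall(labels):
    """Train on the given labels and report how much real fraud is caught."""
    model = LogisticRegression().fit(X_train, labels)
    return recall_score(y_test, model.predict(X_test))

print("fraud recall, clean labels   :", round(fraud_recall(y_train), 3))

# The poisoning step: relabel a slice of fraud examples as legitimate
# before training ever starts. Nothing else about the pipeline changes.
poisoned = y_train.copy()
fraud_idx = np.where(poisoned == 1)[0]
flip = rng.choice(fraud_idx, size=int(0.3 * len(fraud_idx)), replace=False)
poisoned[flip] = 0

print("fraud recall, poisoned labels:", round(fraud_recall(poisoned), 3))
```

The point of the sketch is not the numbers but the shape of the attack: the code, the model, and the evaluation are untouched. Only the data is wrong, which is exactly why this kind of manipulation is so hard to spot.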
This kind of manipulation doesn’t require fireworks or dramatic attacks. It is quiet, insidious, and often invisible.
The results are structural, systemic, and cumulative. Over time, small distortions in AI’s “understanding” can cascade into large-scale consequences.
The poisoned well of data raises questions about trust.
How can we rely on systems that may have been subtly influenced, without our awareness?
Social media has shown us a human-scale version of this problem: coordinated campaigns, bot armies, and fake engagement can make false narratives seem real.
AI is vulnerable to similar distortions, but with far greater reach and speed. We often trust AI because it seems precise, logical, and detached. But the detachment is an illusion. Every dataset, every label, every annotation carries the fingerprints of human choices and vulnerabilities.
When these fingerprints are intentionally or unintentionally skewed, the AI we rely on may quietly guide decisions in harmful directions.
One of the most unsettling phenomena in modern AI is hallucination: when a model confidently presents information that is entirely false. Hallucinations aren’t bugs; they are emergent behaviors caused by flawed data, ambiguous context, or manipulative inputs.
Hallucinations are a reminder that AI is far from flawless; it is deeply dependent on the quality of what it consumes, and even small errors can echo far beyond their origin.
Even in the face of autonomous intelligence, human oversight remains critical.
AI doesn’t “know” ethics or intention; it learns patterns and relationships. The responsibility of vigilance, auditing, and understanding falls to us. Detecting poisoned data, mitigating biases, and questioning outputs is a human task, demanding curiosity, diligence, and critical thinking.
The poisoned well is both a warning and an invitation. It reminds us that technology is powerful, but only as responsible as the humans who cultivate it.
Ignorance is not bliss here; it is a huge risk.

Choosing awareness is the first defense.
By understanding the risks of data poisoning, by critically engaging with the AI systems we rely on, we reclaim some measure of agency. We can design monitoring, validation, and feedback loops that prevent manipulation from spreading unseen.
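As one illustration of what such a validation loop might look like, here is a small, hypothetical sketch: a gate that compares each new training batch against a trusted baseline before it is allowed anywhere near retraining. The function name, thresholds, and checks are my own simplifications, not a standard recipe.

```python
# A minimal sketch of a data-validation gate, assuming a simple setup
# where new training batches are screened against a trusted baseline.
# Thresholds and checks are illustrative, not recommendations.
import numpy as np
from scipy.stats import ks_2samp

def validate_batch(baseline_X, baseline_y, new_X, new_y,
                   max_label_shift=0.05, p_value_floor=0.01):
    """Return (ok, reasons) for a candidate training batch."""
    reasons = []

    # 1. Has the share of positive labels (e.g. fraud) shifted suspiciously?
    shift = abs(new_y.mean() - baseline_y.mean())
    if shift > max_label_shift:
        reasons.append(f"label rate shifted by {shift:.3f}")

    # 2. Does any feature's distribution differ sharply from the baseline?
    for j in range(baseline_X.shape[1]):
        result = ks_2samp(baseline_X[:, j], new_X[:, j])
        if result.pvalue < p_value_floor:
            reasons.append(f"feature {j} drifted (p={result.pvalue:.4f})")

    return (len(reasons) == 0), reasons
```

In practice, a batch that fails such a gate would be quarantined for human review rather than silently trained on or discarded; that review step is where the vigilance described above actually lives.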
And yet, the challenge is broader than any technical fix. It is cultural, ethical, and collective. It asks us to consider: how much do we trust automated systems, and what are we willing to do to safeguard the integrity of the digital ecosystems we live in?
The well may be fairly easily poisoned, but the story isn’t over. AI’s power to learn, predict, and assist is still immense. But the clarity of its vision for the future depends on us: our willingness to notice distortions, question inputs, and act responsibly.
The risk of manipulation is real, but it also highlights our responsibility as captains of the digital world.
As we continue to explore AI’s presence, influence, and potential, I hope that all of us remember that vigilance, awareness, and ethical engagement are not optional. They are essential.
This was the second part of a 5-part series on the nature and effects of AI on our future. Part 3 examines a more human-centered problem of AI: social media and the collapse of connection. How can we live and thrive in a world where trust itself is ever more fragile?
Inspired by a recent webinar for a client company on AI and cybersecurity, which explored the subtle ways AI influences decisions, the risks of data poisoning, and the growing need for critical awareness in digital spaces.
Inka Vuorinen is a Design and Foresight Coach who helps organizations harness the power of foresight to design future-ready services and teams. Through their work at Momentum Design, they guide companies in building internal foresight capabilities so they can anticipate change, innovate with confidence, and stay ahead of disruption. Inka’s approach combines futures thinking with radical creativity, empowering teams to explore new possibilities and turn insights into actionable strategies. Their workshops are designed for organizations that want to move beyond short-term problem-solving and start shaping their own futures. Inka collaborates with forward-thinking companies that are ready to invest in their future and foster a culture of continuous innovation. Read more: www.momentum-design.fi
