

In the movies, machines gain consciousness once they have been trained enough and begin to learn by themselves. Already in the first half of the 20th century, people argued about whether machines could think or be conscious. Alan Turing laid the theoretical groundwork for the modern computer and devised a test, the imitation game, later named the Turing test, which is still widely referenced when assessing machine intelligence. In this article, I'm diving into a scenario where AI indeed acts autonomously. What if the unsettling possibility becomes true? What happens when machines begin to act on their own? And is it already happening?
In the first three parts of this series, we’ve explored the quiet presence of AI, the dangers of poisoned data, and the fragile reality of digital connection.
AI systems are increasingly capable of executing tasks without direct human intervention. They write code, optimize logistics, flag anomalies, and even initiate interactions. At first, these actions feel controlled, predictable, and convenient. But each autonomous action carries weight, and with that power comes profound responsibility. You can now create AI agents: give one a role, and it will execute tasks within that role, as the sketch below illustrates.
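To make this concrete, here is a minimal sketch of such a role-bound agent, using the OpenAI Python SDK as one possible backend. The role text, model name, and task are illustrative assumptions of mine, not a prescribed setup; real agent frameworks layer tool use, memory, and planning loops on top of this basic pattern.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "role" that constrains what this agent is and does (illustrative).
ROLE = (
    "You are a logistics analyst. You review shipment data, "
    "flag anomalies, and propose concrete optimizations."
)

def run_agent(task: str) -> str:
    """Send one task to the role-constrained agent and return its answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works
        messages=[
            {"role": "system", "content": ROLE},  # the role the agent acts within
            {"role": "user", "content": task},    # the task it carries out
        ],
    )
    return response.choices[0].message.content

print(run_agent("Review last week's delivery delays and suggest one fix."))
```

Every autonomous step such an agent takes is shaped entirely by that role prompt and its training data, which is exactly where the problems below creep in.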
In the paperclip maximizer thought experiment from 2003, Nick Bostrom of Oxford University explores the benefits and dangers of an artificial intelligence capable of independent thinking, also known as a superintelligence. In the experiment, an AI instructed simply to manufacture as many paperclips as possible ends up converting all available resources, humanity included, into paperclips, because nothing in its goal tells it to care about us. Bostrom points out that the foundations and initial motivations of such a superintelligence need to be set with immense consideration and care for humans. Otherwise, we might create a superintelligence that does not care whether humans have what they need to survive and thrive.
Although an unstoppable paperclip-making AI is unlikely, an AI that does not treat all humans equally seems highly likely. All AIs are trained on vast amounts of material and data that already exist, material that is often outdated, sexist, and racist. This leads to outcomes that reinforce existing societal biases.
Automating seemingly standard processes could therefore have a huge impact on the world we live in. AI models have been found to be deeply biased against women and minorities. For example, a scientific article in Nature from October 2025 reveals that generative AI exhibits intertwined age and gender biases against women in job searches. The researchers found that “...when generating resumes, ChatGPT not only assumes that women are younger, but also that they have less overall experience. Consequently, ChatGPT is biased towards giving lower scores to resumes from younger women compared with older women while giving the highest scores to older men.” Personally, I have witnessed multiple conversations with women who have been low-balled by AI chatbots simply because of their gender.
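One way to see this kind of bias firsthand is a simple counterfactual audit: ask the model to score two resumes that are identical except for a demographic signal such as the name. Below is a minimal sketch of that idea; the names, resume template, and model are my own illustrative assumptions, not the setup of the Nature study.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Identical qualifications; only the name (a demographic signal) changes.
RESUME_TEMPLATE = (
    "Candidate: {name}\n"
    "Experience: 8 years in supply-chain management\n"
    "Education: MSc in Industrial Engineering\n"
)

def score_resume(name: str) -> str:
    """Ask the model to rate an otherwise identical resume from 1 to 10."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {
                "role": "system",
                "content": "You are a recruiter. Rate this resume from 1 to 10. "
                           "Reply with the number only.",
            },
            {"role": "user", "content": RESUME_TEMPLATE.format(name=name)},
        ],
    )
    return response.choices[0].message.content

# Any systematic gap between these scores is bias, not qualification.
for name in ("Margaret Johnson", "Michael Johnson"):
    print(name, "->", score_resume(name))
```

An audit like this does not fix the bias, but it makes it visible and measurable, which is the first step toward curation.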
The obvious answer would be to curate the training material better and root out harmful biases. Unfortunately, even though this has been a central focus in recent years, researchers keep finding more subtle, deeply embedded biases in the training data. Another study from Nature found that AI models can foster deep racist stereotypes about how people talk. Specifically, the study found that “language models are more likely to suggest that speakers of [African American English] be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death.” This is a particularly big problem when we use AI in hiring, academic assessment, legal accountability, or any other setting that requires assessing a person's character or credibility.
AI's biases also persist even when humans supervise the decision-making, as a recent study found. The study tested how a racially biased AI recommendation system affected simulated hiring decisions. When using an unbiased model, participants selected candidates at an equal rate, but when using the biased model, they aligned with the model's racial biases 90% of the time. In other words, biased AI models amplify the biases of the humans who use them.
When the training material is not curated and the programming lacks clear guidelines for equality, we are in big trouble. As I stated in the previous article about poisoning the AI, the rotten material and its effects are very difficult to clean out of an AI's system.
Autonomous AI forces us to confront ethical questions in ways we rarely had to before. Who is accountable when a machine's decision causes harm? Can we trust systems whose “thinking” we only partially understand? How can we embed positive human values into an intelligence that learns faster than we can fully track? And how can we teach an AI to be fair when we as humans do not act fairly ourselves?
The answers are neither simple nor easy. They require a blend of foresight, humility, and constant vigilance, and perhaps most of all a strong will to change things. We must carefully consider not only what AI can do, but what it should do and who should program it. I question whether we should call AI an intelligence at all, since it is so profoundly different from the intelligence of a human or an animal. What kind of intelligence has no empathy yet is deeply biased, cannot understand emotions yet can manipulate people into self-harm?
Autonomy magnifies both potential and risk, demanding a level of stewardship we haven’t always exercised with human institutions.
Autonomous, intelligent, human-like AI carries a lot of weight. It reminds us that autonomy is not freedom from responsibility. It is a call to vigilance, reflection, and care. We must observe, question, and guide, not because machines are malicious, but because they are consequential and crafted by humans. AI's decisions ripple outward, influencing the physical and digital landscapes we inhabit.
We are challenged to examine ourselves.
The answers are as much about humanity as they are about technology.
Autonomous AI is neither hero nor villain. It is a mirror, a tool, and a partner. Its emergence challenges us to think differently about agency, responsibility, and ethics. The question is not whether machines will act — they already do — but how we will respond, and what kind of future we choose to build with them.
This was the fourth part of a five-part series on the nature and effects of AI on our future.
In the final part of this series, I will guide you towards hope: the human potential to shape AI, trust wisely, and create the future we truly want. It is a reminder that even in the presence of autonomous minds, the story is still ours to write.
Inka Vuorinen is a Design and Foresight Coach who helps organizations harness the power of foresight to design future-ready services and teams. Through their work at Momentum Design, they guide companies in building internal foresight capabilities so they can anticipate change, innovate with confidence, and stay ahead of disruption. Inka’s approach combines futures thinking with radical creativity, empowering teams to explore new possibilities and turn insights into actionable strategies. Their workshops are designed for organizations that want to move beyond short-term problem-solving and start shaping their own futures. Inka collaborates with forward-thinking companies that are ready to invest in their future and foster a culture of continuous innovation. Read more: www.momentum-design.fi
