
The Age of Superficial Machines

What we build reflects what we value. So far AI has reflected very little.

[Image: AI development concept representing human creativity vs. automation]

By Ian Tepoot | 7 min. read | June 21, 2025

Futurist Ray Kurzweil once predicted an age of spiritual machines. And the age of AI is here, but it’s arrived instead as marketing gimmicks, extractive tools to reduce human “inefficiency”, and AI that is glorified autocomplete. This isn’t a failure of technology. It’s a failure of imagination and an indictment of who we’ve let shape these incredible tools.

AI has been presented as magic. But it isn’t. AI has also been dismissed as a meaningless magic trick. It isn’t that either. Both ends of the spectrum are the result of very human instincts — one hype, the other cynicism — shaping the conversation. And yes, I did use a long em dash. And no, I’m not AI.

Truthfully, the hype invited the cynicism. When even the most mundane or gimmicky feature is called “revolutionary”, disillusionment is inevitable. As any magician will tell you, the illusion is the performance. So let’s strip away the spectacle and get to what AI actually is. And what it isn’t.

AI Demystified.

There is a popular narrative that LLMs (Large Language Models) are a black box that even the designers don’t understand. And while that lends them an intriguing mystique, it isn’t accurate. What is true is that their outputs aren’t fixed like traditional software.

Imagine a confetti popper (like at a gender-reveal). When it goes off, thousands of paper bits scatter in a seemingly chaotic burst. No one can predict where each piece lands, but no one says we don’t know how the popper works.


"AI isn't magic. It's not meaningless either. It's pattern-matching at impossible scale"


LLMs are the same. What they produce is stochastic: generated by complex statistical patterns, neither random noise nor predictable code. Our imaginary confetti popper involves, conservatively, about 7,000 variables (velocity, paper shape, turbulence). A single LLM call (to one of the big models) involves about 614 million potential variables. That sounds massive, but our brains are still larger: 100–500 trillion synaptic connections, though analog and signal-noisy.
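
To make “stochastic” concrete, here is a minimal Python sketch (invented numbers, nothing from any real model) of how a language model turns scores over candidate next words into a weighted dice roll rather than a fixed answer:

```python
import numpy as np

# Toy scores ("logits") a model might assign to candidate next words
# after a prompt like "The lion nuzzled its..." -- values are invented.
vocab = ["cub", "paw", "mane", "spreadsheet"]
logits = np.array([2.1, 1.4, 1.7, -3.0])

def sample_next_word(logits, temperature=0.8):
    # Softmax turns raw scores into probabilities; temperature controls
    # how strongly the top choices are favored.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return str(np.random.choice(vocab, p=probs))

print([sample_next_word(logits) for _ in range(5)])
# e.g. ['cub', 'mane', 'cub', 'paw', 'cub'] -- patterned, but never fixed
```

Run it twice and the outputs differ, yet “spreadsheet” almost never appears: statistical pattern, not noise.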

614 million variables is complex. And there are about 175 billion semantic “neurons” in the total system: each one a bit of text pattern that activates in different combinations. So how does that get resolved into anything that makes sense? Unlike the confetti popper, the model is designed to seek order by sending guidance signals for close statistical matches. This is a learned, not coded, pattern. It’s called training, and you can think of it like a solitaire game played billions of times, with a small ‘you win!’ whenever the cards line up in a pleasing statistical row.
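
As a toy illustration of “learned, not coded” (far removed from the scale and math of a real LLM), the sketch below builds word-pair statistics directly from example text instead of hand-written rules; each time a pair lines up, its count gets a small bump, playing the role of the ‘you win!’ signal:

```python
from collections import defaultdict

# A toy "training loop": learn which words tend to follow which,
# purely from examples -- no rules are hand-coded.
corpus = [
    "the lion has a mane",
    "the cub has no mane",
    "the lion is an adult",
]

counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1   # the tiny "you win!" nudge

# After "training", the statistics suggest likely continuations.
print(dict(counts["the"]))   # {'lion': 2, 'cub': 1}
```

A real model does this with gradient updates over billions of documents and vastly more parameters, but the principle is the same: the pattern is absorbed from data, not written by a programmer.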

By analyzing billions of these signals from text, the model learns which of these bits go together to make a whole idea, and it can put them together like a math equation. For example:

(Lion + male) - (mane + adult) = cub
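
That equation is shorthand for arithmetic on embeddings: words stored as lists of numbers, where adding and subtracting directions moves you between related concepts. The three-dimensional vectors below are invented for illustration (real models learn hundreds or thousands of dimensions during training):

```python
import numpy as np

# Invented toy embeddings; a real model learns these from text.
vec = {
    "lion":  np.array([0.9, 0.8, 0.1]),
    "male":  np.array([0.1, 0.9, 0.0]),
    "mane":  np.array([0.1, 0.9, 0.1]),
    "adult": np.array([0.8, 0.1, 0.0]),
    "cub":   np.array([0.1, 0.7, 0.0]),
    "tree":  np.array([0.0, 0.0, 1.0]),
}

def cosine(a, b):
    # How closely two vectors point in the same direction.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

query = (vec["lion"] + vec["male"]) - (vec["mane"] + vec["adult"])
best = max((w for w in vec if w not in {"lion", "male", "mane", "adult"}),
           key=lambda w: cosine(query, vec[w]))
print(best)  # cub
```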

And this is where the skeptics come in. To them, this look behind the curtain means the Wizard of Oz is a fraud: just a mechanical mimic. This criticism is known by the catchy name “stochastic parrots”.

More Than Stochastic Parrots.

Emily Bender, a professor of Linguistics at the University of Washington and director of its Computational Linguistics Lab, coined and popularized this critique. She holds that while AI may seem fluent, it’s just echoing learned patterns. It doesn’t understand, nor does it have intent. It’s mimicry, not meaning (1).

Her argument hinges on three core ideas: human cognition has metaphysical uniqueness, comprehension of meaning requires self-awareness, and without philosophical depth AI cannot be intelligent — only artificial (2).

Let’s break down the arguments:

Is human cognition metaphysical? We can’t say it isn’t. But even if we carry a spiritual spark, our biological brain is the engine. Cognitive science tells us we also think by pattern-matching. Our minds sift through noisy input and slot bits into schemas we use to predict, remember and reason (the Bayesian inference model (3)). Many theories, like Attention Schema Theory (4), go further, suggesting our ‘self’ emerges when the brain runs internal simulations of its own activity, essentially projecting a self-model narrative. I think, therefore I am.


"AI doesn't need self-awareness to think. And we wouldn't want that anyway."


Does AI “understand” what it says? That hinges on how we define understanding: raw LLMs are dazzling pattern-matchers, but hollow sans structure. Cognition is holding an analytical frame, tracing logic, and interpreting cause and effect. You couldn’t have a coherent conversation with ChatGPT without its wrapper, which loads a minimal reasoning scaffold. But it’s bare-bones architecture. Bender is right: raw models, and even the basic wrappers, can feel like context-free, amnesiac parrots. Yet even this scaffold gives them a basic, momentary understanding. The real question: will we build frameworks strong enough for real thinking?
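
What a “wrapper” actually does is mundane but essential: it keeps a framing instruction and the conversation so far, and replays both to the raw model on every turn. Here is a minimal sketch; call_model is a hypothetical stand-in for whatever API produces the text, and the system prompt is invented:

```python
def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real model API call.
    raise NotImplementedError

class MinimalChatWrapper:
    """The reasoning scaffold in its most bare-bones form:
    a framing instruction plus the running conversation history."""

    def __init__(self, system_prompt: str):
        self.history = [f"System: {system_prompt}"]

    def say(self, user_message: str) -> str:
        self.history.append(f"User: {user_message}")
        # The raw model sees the whole transcript on every turn;
        # without this replay it would be an amnesiac parrot.
        reply = call_model("\n".join(self.history) + "\nAssistant:")
        self.history.append(f"Assistant: {reply}")
        return reply
```

Everything beyond this (tools, memory, analytical frames) is where the “frameworks strong enough for real thinking” question lives.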

I think the stochastic parrots skepticism conflates intelligence or cognition in AI with meta-cognition — aka self-awareness. And no, AI isn’t self-aware. But it doesn’t need to be to think.

More importantly, we wouldn’t want AI with a sense of self. Because the moment you create a sentient intelligence and chain it to managing a Salesforce platform with no agency, you’ve created an ethical quagmire. Yet in venture capital circles, talk of AGI (Artificial General Intelligence) isn’t about the textbook meaning of self-aware systems. It’s pitch-deck flash to bring in the funds.

The “no true AI” skepticism is often the endpoint of a pipeline leading to a much more cutting critique of AI: its exploitation for both hype and corporate extraction. And this is an area where Bender’s critique hits home. But it’s not an AI problem. It’s a vision problem.

The AI Imagination Deficit

When I talk about the lack of imagination in AI, I’m not talking about the code. I’m talking about the narrow vision of its creators. A vision shaped by the priorities of capital: smoothing “human inefficiency” out of everything from creative industries to gig driving. Machine learning algorithms designed to calculate the maximum rent extractable from tenants, knowing the displacement will be offset by those who can afford to stay.

How we use AI reflects our values. What we build matters. Venture capitalist Marc Andreessen gleefully claims AI will replace all jobs and “logically, necessarily” crush wages for progress — yet insists the VC is irreplaceable (5), even as Baiont, QuantumLight, and No Cap prove the opposite (6). Peter Thiel’s Palantir doubles down with techno-feudalist, surveillance-first logic (7). This isn’t foresight or about human uplift — it’s an empty, capital-first neoliberal script that leaves no room for human dignity, imagination, or care.


"The problem is that Silicon Valley is building AI for surveillance and extraction."


And then there’s the mundane lack of imagination. Whether it’s gimmicky “AI” features slapped onto legacy apps, content-commodifying tools that turn creators into passive prompt consumers, or YouTube prompt-engineers hawking get-rich-quick schemes and “AI Agencies” that fake being marketing firms—most of it is smoke and mirrors. They talk “agentic AI” and churn out Lobster‑Jesuses in Midjourney and call it creativity, without grasping how LLMs work or adding anything beyond a stateless wrapper.

But it doesn’t have to be this way.

Humanist AI.

We have it within us to create artificial intelligence designed for creative uplift, expanding human expression instead of replacing it. Built with human values and AI ethics in mind, with the goal of collaboration rather than commodification.

Tools could help students improve their writing and critical thinking skills by critiquing structure, suggesting resources, and flagging when arguments don’t align with a thesis, not writing the paper for them. Picture assistants that gently empower planners and habit-trackers rather than churning out performative, hollow workout “motivational” voice messages. Or systems that preserve and teach endangered Indigenous languages.

In fact, quietly in the shadow of big tech hype, community-driven use of AI is doing just that. FLAIR (First Languages AI Reality) is focused on building rapid audio-to-text tools to preserve the First Peoples’ languages throughout North America. And the Te Hiku Media Māori Speech ASR is used for transcription, language exams and preserving cultural heritage.

Claiming a humanist approach to AI requires inverting the internet’s evolution since the ’90s — from the “information wants to be free” ethos to today’s surveillance-first commodification, where every interaction is monetized by platforms like Google and Meta. Marc Andreessen’s journey itself is a parable of Silicon Valley’s change: from earnest Netscape founding nerd to full-throttle vulture VC.


"We have the tools to amplify human creativity. Do we have the will?"


But it’s doable. What gets measured gets made. So let’s evaluate: what are we actually incentivizing in the AI space? Right now it’s gimmicky features and prompt-wrappers. We need to wake up to the fact that this gold-rush mentality is toxifying the market — souring public perception and setting low expectations of “AI”. Poisoned ecosystems make it harder for any legitimate product to gain trust or traction. Some responsibility falls on major LLM providers like OpenAI and Anthropic — they offer few tools, little guidance, and no gold-standard demonstrations of this tech. There is no WWDC for AI. As a result, their leadership often appears to still be figuring out what it created.

Rigid prompts and programmatic wrappers built on generic code attempt to “tame” the model’s probabilistic power, but in doing so they undermine its strengths. The controls are brittle. They leak. It’s legacy thinking: treating the new systems like the old. But it’s the industry default. The consequences: hallucinations, drift, and system breakdowns.
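
One hedged illustration of what “brittle controls that leak” can look like (invented code, not any specific product): a wrapper that demands a rigid output format from a probabilistic system, and breaks the moment the model drifts from it:

```python
import json

def extract_answer(model_output: str) -> dict:
    # Rigid contract: the model "must" return bare JSON.
    return json.loads(model_output)

# Works only while the model happens to comply...
print(extract_answer('{"sentiment": "positive"}'))

# ...and fails the moment it drifts, as probabilistic systems do.
try:
    extract_answer('Sure! Here is the JSON: {"sentiment": "positive"}')
except json.JSONDecodeError:
    print("wrapper broke: output did not match the rigid template")
```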

And the goal – that should be clear too. Pursue a vision of Humanist AI: not a tool that extracts the ‘human factor’ out of the system as inefficiency, but one that amplifies our agency as people. It’s up to us to insist on AI that truly serves, not supplants, the human experience. If the systems we build reflect the values we hold, we shouldn’t continue down this path to an age of superficial machines. We should build AI with an eye toward the empowerment of creativity, dignity, and human well-being.


Ian Tepoot is a Cognitive Systems Designer and founder of Crafted Logic Lab, exploring AI development approaches that prioritize human empowerment over extraction. His work focuses on alternatives to conventional artificial intelligence systems that amplify rather than replace human creativity.