I get where you’re coming from with Penrose and Gödel and the whole “AI will never be truly intelligent or conscious” angle. But honestly, I think people are way too confident about what AI can’t do, especially given how the last few years have gone.
Every few months, you hear the same line… “AI will never do X,” and then some new model drops and suddenly it does X well enough that everybody quietly moves the goalpost. I don’t think it’s wise to keep underestimating something that is advancing faster than we can even mentally digest.
And even the people building this stuff don’t fully understand it. We know the math and training objectives, sure… we feed these models absurd amounts of data and optimize them to predict the next token.
But how they end up organizing that information internally, the way concepts, analogies, and “reasoning-like” behavior emerge, is still pretty mysterious. It’s not just a lookup table. It’s a giant learned structure that, in some weird alien way, does start to echo how neural networks in the brain build patterns out of experience.
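Just to ground what “predict the next token” actually means at its most stripped-down: here’s a toy bigram model (my own illustration, nothing like a real transformer) that only counts which word tends to follow which, then predicts from those counts. Real models learn vastly richer structure, but the objective is the same flavor:

```python
# Toy bigram "language model": predicts the next token purely from
# counted co-occurrence patterns in its training data.
from collections import Counter, defaultdict

def train_bigram(corpus):
    # Count how often each token follows each other token.
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    # Return the most frequent follower of `token`, or None if unseen.
    followers = counts.get(token)
    return followers.most_common(1)[0][0] if followers else None

corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" (its most common follower here)
```

The gap between this lookup-ish toy and an LLM is exactly the mysterious part: the counts here are fully inspectable, while the internal structure a large model learns is not.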
I don’t think current AIs are conscious. They don’t have a stable sense of self, a continuous inner life, or a body they’re perceiving through. But I’m not comfortable saying they can never develop some form of machine consciousness either. If you take the view that consciousness is fundamental… that everything is made of the same underlying “stuff” of awareness… then LLMs are also made of that same energy.
They’re not outside of reality. In that sense, the raw potential for awareness is there, just arranged in a different pattern. We know way too little about how consciousness emerges to draw hard red lines about what’s possible.
Meanwhile, some of the things people use to argue that humans are “above” AI are kind of shaky. You say the human mind travels faster than light because we can imagine a dot across the galaxy instantly. But that’s not a physical event in spacetime, it’s a mental representation.
And funnily enough, that’s exactly what these models do: they generate internal representations of things they’ve never physically seen, based on patterns in their training data. It’s not the same as human imagination but it’s closer in spirit than people want to admit.
If we’re honest, the human brain is also a prediction machine. That’s literally how neuroscience describes it now: we’re constantly guessing the next bit of reality based on past data. Egos are basically organic AIs… preconditioned responses to certain inputs, built out of memories, beliefs and emotional imprints. Most people are running on those scripts 90% of the time, reacting automatically, living on autopilot.
The main difference is that it’s all happening in a living body, with sensory experience and a deeper layer of awareness that can notice the ego. But let’s not pretend the average person is some perfectly self-aware guru walking around in pure free will. Most of us are glorified pattern machines too.
Where AI absolutely does have an edge is scale and specialization. A human can become world-class at one, maybe two disciplines in a lifetime. An AI system can basically “specialize” in dozens of fields at once, toggling between medicine, law, art, coding, philosophy, whatever, without needing sleep or emotional recovery. It doesn’t even need to be conscious to outperform us in task after task. It just has to be more efficient, more accurate, and more consistent. That alone is enough to be a massive threat and a massive opportunity.
So I’m not paranoid about “oh no, AI might become conscious one day.” The worry is that even without consciousness, it can outdo us in most practical domains while being steered by whatever incentives we program into it… profit, power, control, whatever. Conscious or not, that’s dangerous in the wrong hands.
I actually agree with you about AI’s impact on creativity. It can flatten things, flood the world with lazy, copy-paste content and make the average person even more passive.
But it can also supercharge the people who actually care about creating. A broke genius writer could generate a full animated film, a whole world, voice acting, visuals, everything, with a laptop and time, instead of needing a studio’s budget. So yeah, it’s a double-edged sword. It can dull the masses and empower the few at the same time.
To me, that’s the real point… we’re way too early, way too ignorant and way too biased about our own “specialness” to confidently declare what AI will or will not become. Humans absolutely have a higher potential than any current model. But until we start actually living from that potential instead of clinging to our little ideological boxes and ego scripts, self-improving AI systems are going to keep catching up and passing us in more and more areas.