We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which token – a word, or a fragment of one – will come next in the sequence, based on the data it’s been trained on.
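In rough terms, the loop being described looks something like the toy sketch below. Everything in it is invented for illustration – the tiny vocabulary, the hand-written probabilities, the two-token context – whereas a real model computes these probabilities over tens of thousands of tokens using billions of learned parameters. But the mechanism is the same: assign probabilities to the next token, pick one, append it, repeat.

```python
import random

# Toy next-token table (invented for illustration; a real model derives
# these probabilities from learned parameters, not a lookup table).
# Maps a two-token context to a distribution over possible next tokens.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def generate(context, steps):
    tokens = list(context)
    for _ in range(steps):
        probs = next_token_probs.get(tuple(tokens[-2:]))
        if probs is None:  # a context the toy table doesn't cover
            break
        candidates, weights = zip(*probs.items())
        # Sample the next token in proportion to its probability.
        tokens.append(random.choices(candidates, weights=weights)[0])
    return " ".join(tokens)

print(generate(["the", "cat"], 3))  # e.g. "the cat sat on the"
```

There is no comprehension anywhere in that loop, only sampling from a distribution, which is the article's point.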
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition – not a shred – there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with that data.
Philosopher David Chalmers calls the puzzle of how and why physical processes in the body give rise to conscious experience the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal mental states with sensory representations of bodily signals (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotions for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, which is a machine, and consciousness, which is a human phenomenon.
Yes, and that is precisely what you have done in your response.
You saw something you disagreed with, as did I. You felt an impulse to argue about it, as did I. You predicted the right series of words to convey the argument, and then typed them, as did I.
There is no deep thought to what either of us has done here. We have in fact both done as little rigorous thinking as necessary, relying instead on experience from seeing other people do the same thing, because that is vastly more efficient than doing a full philosophical disassembly of every last thing we converse about.
That disassembly is expensive. Not only does it take time, but it puts us at risk of having to reevaluate notions that we’re comfortable with, and would rather not revisit. I look at what you’ve written, and I see no sign of a mind that is in a state suitable for that. Your words are defensive (“delusion”) rather than curious, so how can you have a discussion that is intellectual, rather than one that merely pretends to be?
No, I didn’t start by predicting a series of words; I already had thoughts on the subject, which existed completely outside of this thread.

By the way, I’ve been working on a scenario for my D&D campaign where there’s an evil queen who rules a murky empire to the east. There’s a race of uber-intelligent ogres her mages created, who then revolted. She managed to exile the ogres to a small valley once they reached a sort of power stalemate, and made a treaty with them whereby she leaves them alone and they stay in their little valley and don’t oppose her, or aid anyone who opposes her. I figured somehow these ogres – generally known as “Bane Ogres” because of an offhand comment the queen once made about them being the bane of her existence – would convey information to the player characters about a key to her destruction, but because of their treaty they have to do it without actually doing it. Not sure how to work that yet.

Anyway, the point of this is that the completely out-of-context information I just gave you is in no way related to what we were talking about, and wasn’t inspired by constructing a series of relevant words like you’re proposing. I also enjoy designing and printing 3D objects and programming little circuit thingies called ESP32s to do home automation. I didn’t get interested in that because of this thread, and I can’t imagine how an LLM-like mental process would prompt me to tell you about it, or why I would think you would be interested in knowing anything about my hobbies.

Anyway, nice talking to you. Cute theory you got there about brain function tho; I can tell you know people inside out.