We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it simply guesses which token – roughly, which word or word fragment – is most likely to come next in the sequence, based on the data it’s been trained on.
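To make that concrete, here is a minimal toy sketch of the “guess the next token” loop. It uses simple bigram counts over an invented corpus rather than a neural network – the corpus and names here are illustrative, not any real model’s internals – but the autoregressive sampling loop has the same shape as the process described above.

```python
# Minimal sketch of next-token prediction using bigram counts.
# Illustrative only: real LLMs score subword tokens with a neural
# network conditioned on the whole preceding context, not just the
# previous word, but the generation loop has the same shape.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the mat".split()

# Count how often each token follows each other token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(prev):
    """Sample the next token in proportion to how often it followed `prev`."""
    counts = following[prev]
    if not counts:  # dead end: `prev` never appeared mid-sequence
        return None
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# Generate text one guessed token at a time: predict, sample, append.
token, output = "the", ["the"]
for _ in range(8):
    token = next_token(token)
    if token is None:
        break
    output.append(token)
print(" ".join(output))  # e.g. "the dog sat on the mat and the cat"
```

Scaling this from bigram counts to a transformer conditioned on thousands of prior tokens changes the quality of the guesses enormously, but not the basic predict–sample–append structure.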

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition – not a shred – there’s a fundamental gap between the data it consumes (data born of human feelings and experience) and what it can do with that data.

Philosopher David Chalmers calls the question of how physical processes in the body and brain give rise to subjective experience the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness emerges from the integration of internal mental states with sensory representations of bodily changes (such as heart rate, sweating and much more).

Given the paramount importance of the human senses and emotions for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI (a machine) and consciousness (a human phenomenon).

https://archive.ph/Fapar

  • hera@feddit.uk · 16 hours ago

    Philosophers are so desperate for humans to be special. How is outputting things based on things it has learned any different to what humans do?

    We observe things, we learn things and when required we do or say things based on the things we observed and learned. That’s exactly what the AI is doing.

    I don’t think we have achieved “AGI” but I do think this argument is stupid.

    • ArbitraryValue@sh.itjust.works · 15 hours ago

      Yes, the first step to determining that AI has no capability for cognition is apparently to admit that neither you nor anyone else has any real understanding of what cognition* is or how it can possibly arise from purely mechanistic computation (either with carbon or with silicon).

      > Given the paramount importance of the human senses and emotion for consciousness to “happen”

      Given? Given by what? Fiction in which robots can’t comprehend the human concept called “love”?

      *Or “sentience” or whatever other term is used to describe the same concept.

      • hera@feddit.uk · 5 minutes ago

        This is always my point when it comes to this discussion. Scientists tend to get to the point in the discussion where consciousness is brought up, then start waving their hands and acting as if magic is real.

    • aesthelete@lemmy.world · 15 hours ago

      > How is outputting things based on things it has learned any different to what humans do?

      Humans are not probabilistic, predictive chat models. If you think reasoning is taking a series of inputs and then echoing the most common of those as output, then you mustn’t reason well or often.

      If you were born during the first industrial revolution, then you’d think the mind was a complicated machine. People seem to always anthropomorphize inventions of the era.

      • kibiz0r@midwest.social · 12 hours ago

        > If you were born during the first industrial revolution, then you’d think the mind was a complicated machine. People seem to always anthropomorphize inventions of the era.

      • FourWaveforms@lemm.ee · 11 hours ago

        When you typed this response, you were acting as a probabilistic, predictive chat model. You predicted the most likely effective sequence of words to convey ideas. You did this using very different circuitry, but the underlying strategy was the same.

    • counterspell@lemmy.world · 15 hours ago

      No, it’s really not at all the same. Humans don’t think according to the probabilities of what the likely best next word is.

    • middlemanSI@lemmy.world · 15 hours ago

      Most people, evidently including you, can only ever recycle old ideas. Like modern “AI”. Some of us can conceive new ideas.