• Kyrgizion@lemmy.world

    It’s not anytime soon. It can get like 90% of the way there, but that final 10% is the real bitch.

    • WhatAmLemmy@lemmy.world

      The AI we know is missing the I. It does not understand anything. All it does is find patterns in 1’s and 0’s. It has no concept of anything but the 1’s and 0’s in its input data. It has no concept of correlation vs. causation, which is why it constantly hallucinates (confidently presents erroneous, illogical patterns).

      Turns out finding patterns in 1’s and 0’s can do some really cool shit, but it’s not intelligence.
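
      For a feel of what “finding patterns” means here, below is a minimal sketch of statistical next-word prediction - a bigram count table over a made-up toy corpus. Real LLMs use giant neural networks rather than count tables, so take it as an illustration of the objective, not the mechanism:

      ```python
      # Minimal sketch of statistical next-word prediction: a bigram model.
      # Toy corpus and count table are illustrative only; real LLMs learn
      # the same "what comes next" objective with deep neural networks.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat ate the rat".split()

      # Count how often each word follows each other word.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      def predict(word):
          """Return the most frequent successor of `word` in the corpus."""
          counts = following[word]
          return counts.most_common(1)[0][0] if counts else None

      print(predict("the"))  # -> 'cat' (seen twice vs. 'mat'/'rat' once each)
      ```

      No understanding anywhere in there - just frequencies.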

      • Monstrosity@lemm.ee

        This is not necessarily true. While it’s doing pattern recognition at a surface level, we’re not entirely sure how AI arrives at its output.

        But beyond that, a lot of talk has centered on the threshold where AI begins training other AI and improving through iteration. Once that happens, people believe AI will not only improve extremely rapidly, but that we will understand even less of what is happening as AI black boxes train other AI black boxes.

        • Coldcell@sh.itjust.works

          I can’t quite wrap my head around this. These systems were coded, written by humans, to call functions, assign weights, and parse data. How do we not know what they’re doing?

          • Kyrgizion@lemmy.world

            Same way anesthesiology works. We don’t know. We know how to sedate people, but we have no idea why it works. AI is much the same. That doesn’t mean it’s sentient yet, but to call it merely a text predictor is also selling it short. It’s a black box under the hood.

            • Coldcell@sh.itjust.works

              Writing code to process data is absolutely not the same way anesthesiology works 😂 Comparing state-specific, logic-bound systems to the messy biological processes of a nervous system is what gets us this misattribution of ‘AI’ in the first place. Currently it’s just glorified auto-correct working off statistical data about human language. I’m still not sure how a written program can have a voodoo spooky black box that does things we don’t understand as a core part of it.

              • irmoz@lemmy.world

                The uncertainty comes from reverse-engineering how a specific output relates to the prompt input. The model computes the answer to “What is the closest planet to the Sun?” through millions of fuzzy numerical operations. Even if we log which nodes in the neural network fired, and in what order, those activations don’t tell us what the model “meant”, so we can’t precisely say how the answer was computed.
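
                A toy illustration (made-up weights, nothing like a real model’s scale): even in a two-layer network you can print every intermediate value, yet the activations are just numbers that don’t map to human-legible reasoning steps - and production LLMs have billions of them.

                ```python
                # Toy two-layer network: every activation is inspectable, but the
                # numbers carry no individually human-readable meaning. Weights are
                # random stand-ins; a real LLM has billions of learned parameters.
                import numpy as np

                rng = np.random.default_rng(0)
                W1 = rng.normal(size=(4, 8))  # layer 1 weights (made up)
                W2 = rng.normal(size=(8, 3))  # layer 2 weights (made up)

                x = np.array([0.2, -1.0, 0.5, 0.7])  # some input encoding

                hidden = np.maximum(0, x @ W1)  # ReLU activations: fully visible...
                output = hidden @ W2

                print(hidden)  # ...but which of these numbers "means" anything?
                print(output)  # None of them does, individually.
                ```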

          • MangoCats@feddit.it

            It’s a bit of “emergent properties” - so many things are happening under the hood that even the people who build these systems don’t understand exactly how a model does what it does, or why one type of network architecture performs better on a particular class of problems than another.

            The equations of the Lorenz attractor are simple and well studied, but its output is less than predictable, and even those who study it are at a loss to explain “where it’s going to go next” with any precision.
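
            As a concrete sketch of that sensitivity (crude Euler integration in plain Python, step size chosen purely for illustration): two trajectories that start one part in a billion apart end up in completely different places.

            ```python
            # The Lorenz system: three simple coupled ODEs with chaotic output.
            #   dx/dt = sigma * (y - x)
            #   dy/dt = x * (rho - z) - y
            #   dz/dt = x * y - beta * z
            SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
            DT = 0.001  # crude fixed Euler step, for illustration only

            def step(x, y, z):
                return (x + DT * SIGMA * (y - x),
                        y + DT * (x * (RHO - z) - y),
                        z + DT * (x * y - BETA * z))

            a = (1.0, 1.0, 1.0)
            b = (1.0 + 1e-9, 1.0, 1.0)  # differs by one part in a billion

            for _ in range(40_000):  # simulate 40 time units
                a, b = step(*a), step(*b)

            print(a)  # the two trajectories are now wildly different,
            print(b)  # despite near-identical starting conditions
            ```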

          • The_Decryptor@aussie.zone

            Yeah, there’s a mysticism that’s sprung up around LLMs, as if they’re some magic black box rather than a well-understood construct - to the point where you can buy books on Amazon about how to write one from scratch.

            It’s not like ChatGPT or Claude appeared from nowhere; the people who built them give talks about them all the time.

      • MangoCats@feddit.it

        Distill intelligence down to its essence - what is it, really? Predicting what comes next based on… patterns. Patterns you learn in life, from experience, from books, from genetic memory. But that’s all your intelligence is too: pattern recognition and prediction.

        As massive as current AI systems are, consider that you have ~86 billion neurons in your head - devices that evolved over billions of years, ultimately enabling you to survive in a competitive world with trillions of other living creatures, eating without being eaten at least long enough to reproduce, back and back and back for millions of generations.

        Current AI is a bunch of highly simplified computing elements, up to hundreds of thousands of cores. Just as planes fly faster than birds, AI can do some tricks better than human brains, but mostly: not.