I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
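That "continue a prompt with plausible text" idea can be sketched in a few lines. The following is a toy illustration only, and nothing like a real LLM's architecture (real models use neural networks over subword tokens, not character bigram counts), but the objective is analogous: predict a statistically plausible continuation, with no notion of facts at all.

```python
import random
from collections import defaultdict

# A tiny "training corpus" standing in for the web-scale text LLMs learn from.
corpus = "the cat sat on the mat. the dog sat on the rug. the cat saw the dog."

# Count how often each character follows each character (a bigram model).
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def continue_prompt(prompt, n=20, seed=0):
    """Extend the prompt by sampling each next character in proportion
    to how often it followed the previous character in the corpus."""
    rng = random.Random(seed)
    out = prompt
    for _ in range(n):
        followers = counts[out[-1]]
        chars = list(followers)
        weights = [followers[c] for c in chars]
        out += rng.choices(chars, weights=weights)[0]
    return out

print(continue_prompt("the "))
```

The output tends to look vaguely English-like, and will happily assert nonsense, because the model only knows which characters tend to follow which. Scaling that idea up (to neural networks predicting tokens across billions of documents) is what makes LLM output fluent, but the mechanism is still continuation, not lookup in a knowledge base.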

  • YappyMonotheist@lemmy.world · 3 days ago

    Intelligence, as the word has always been used, requires awareness and understanding, not just spitting out data after input, however dynamic and complex that process might be, through a set of rules. AI, as you just described it, does nothing fundamentally different from other computational tools: it speeds up processes that can be calculated or algorithmically structured. I don’t see how that makes “AI” deserving of the adjective ‘intelligent’; it seems more like a marketing term, the same way ‘smartphone’ was. The disagreement we’re having here is semantic…

    • SkyeStarfall@lemmy.blahaj.zone · 3 days ago

      The funny thing is that the goalposts on what is/isn’t intelligent have always shifted in the AI world

      Being good at chess used to be a symbol of high intelligence. Now? Computer software can beat the best chess players, in a fraction of the time they need to think, 100% of the time, and we call that just an algorithm

      This is not how intelligence has always been used. Moreover, we don’t even have a full understanding of what intelligence is

      And as a final note, human brains are also computational “tools”. As far as we can tell, there’s nothing fundamentally different between a brain and a theoretical Turing machine

      And in a way, isn’t what we “spit” out also data? Specifically data in the form of nerve output and all the internal processing that accompanies it?