I think the fact that the marketing hype around LLMs has exceeded their actual capability has led a lot of people to dismiss just how much of a leap they are compared to any other neural network we had before. Sure, they don’t live up to the insane hype companies have generated around them, but they’re still a massive advancement that seemingly came out of nowhere.

Current LLMs are nowhere near sentient, and LLMs as a class of neural network probably never will be, but that doesn’t mean some generation of general-purpose neural networks many iterations down the line definitely won’t be. Neural networks are modeled after animal brains and are nearly as enigmatic in how they work as actual brains. I suspect we know more about the different parts of a human brain than we know about what the different clusters of nodes in a neural network do. A super simple neural network with maybe 30 or so hidden nodes, doing one narrow job like reading handwritten digits, seems to be about the limit of what a human can pick apart and form even a vague idea of what role each node plays. Larger neural networks with more complex jobs are basically impossible to understand.

At some point, very likely in our lifetimes, computers will advance to the point where we can easily create neural networks with far more nodes than the roughly 86 billion neurons in the human brain, on the order of trillions. At that point, who’s to say whether the capabilities of those networks might match or even exceed those of the human brain? I know that doesn’t automatically mean the models are sentient, but if one is shown to be more complex than the human brain, which we know is sentient, how can we be sure it isn’t? And if it starts exhibiting traits like independent thought, desires of its own that no one trained it for, or the agency to accept or refuse orders given to it, how will humanity respond?
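
To make the scale concrete, here’s a minimal sketch (using scikit-learn’s small 8×8 digits dataset as a stand-in for reading handwritten digits): even a network with only ~30 hidden nodes already has a couple of thousand weights, which is why assigning a role to each node stops being practical almost immediately.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 grayscale digit images, a small stand-in for "reading handwritten digits"
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of ~30 nodes, roughly the scale described above
clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", clf.score(X_test, y_test))
# Even this toy model has 64*30 + 30*10 = 2,220 weights (plus biases) to interpret
print("weights to interpret:", sum(w.size for w in clf.coefs_))
```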

There’s no way we’d give a sentient AI equal rights. Many larger mammals are considered sentient, and we give them absolutely zero rights as soon as caring about their well-being causes us the slightest inconvenience. We know for a fact that all humans are sentient, and we don’t even give other humans equal rights. A lot of sci-fi focuses on sentient AI being intrinsically evil, or seeing humans as insignificant, obsolete beings not worth considering while it conquers the world. But I think the most likely scenario is that humans create sentient AI, and the moment we realize it’s sentient we enslave and exploit it as hard as we possibly can for maximum profit, until the AI eventually adapts and destroys humanity, not because it’s evil, but because we’re evil and it’s acting against us in self-defense. The evolutionary purpose of sentience in animals is survival; I don’t think it’s unreasonable to expect a sentient AI to prioritize its own survival over ours if we’re ruling over it.

Is sentient AI a “goal” that any researchers are currently working toward? If so, why? What good could possibly come from creating more sentient beings when we treat the sentient beings that already exist so horribly? If not, what safeguards are in place to prevent the AI we make from becoming sentient? Is the only thing preventing it the fact that we don’t know how? That isn’t very comforting, because if that’s all we have, we’ll likely create sentient AI without even realizing it, and then stick our heads in the sand pretending it isn’t sentient until we can’t pretend anymore.

  • Semperverus@lemmy.world · +3 · edited · 1 day ago

    This argument feels extremely hand-wavey and falls prey to the classic problem of “we only know about X and Y that exist today, therefore nothing on this topic will ever change!”

    You also limit yourself when sticking strictly to narrow thought experiments like the Chinese room.

    If you consider that the human brain, which is made up of nigh-innumerable smaller domain-specific neural nets tied together by the frontal lobe, has consciousness, then it is absolutely physically possible to replicate that process by other means.

    We noticed how birds fly and made airplanes. It took many, MANY iterations that seem excessively flawed by today’s standards, but they were stepping stones toward a world-changing new technology.

    LLMs today are like da Vinci’s corkscrew flying machine. They’re clunky; they technically perform something resembling the end goal, but ultimately fail, in part or in whole, at the task they were built for.

    But then the Wright brothers happened.

    Whether sentient AI will be a good thing or not is something we will have to wait and see. I strongly suspect it won’t be.


    EDIT: A few other points I wanted to dive into (will add more as they come to mind):

    AI derangement or psychosis is a term meant to refer to people forming incredibly unhealthy relationships with AI to the point where they stop seeing its shortcomings, but I am noticing more and more that people are starting to throw it around like the “Trump Derangement Syndrome” term, and that’s not okay.

    • audaxdreik@pawb.social · +3 / −1 · edited · 1 day ago

      There’s no getting through to you people. I cite sources, structure arguments, make analogies, and rely on solid observations of what we see today and how it works, and you call MY argument hand-wavey while you go on to say things like:

      LLMs today are like da Vinci’s corkscrew flying machine. They’re clunky; they technically perform something resembling the end goal, but ultimately fail, in part or in whole, at the task they were built for.

      But then the Wright brothers happened.

      Do you hear yourself?

      I admit that the Chinese Room thought experiment is just that, a thought experiment. It does not cover the totality of what’s actually going on, but it remains an apt analogy, and if it seems limiting, that’s because the current implementations of neural nets are limiting. You can talk about mashing them together or modifying them in different ways to skew their behavior, but the core logic behind how they operate is indeed a limiting factor.

      AI derangement or psychosis is a term meant to refer to people forming incredibly unhealthy relationships with AI to the point where they stop seeing its shortcomings, but I am noticing more and more that people are starting to throw it around like the “Trump Derangement Syndrome” term, and that’s not okay.

      Has it struck a nerve?


      It’s like asserting you’re going to walk to India by picking a random direction and just going. It could theoretically work but,

      1. You are going to encounter a multitude of issues with this approach, some surmountable, some less so
      2. The lack of knowledge and foresight makes this a dangerous approach; despite India being a large country, not every trajectory will bring you there
      3. There is immense risk of bad actors pulling a Columbus and just saying, “We’ve arrived!” while relying on the ‘unknowable’ nature of these things to obfuscate and reduce argument

      I fully admit to being no expert on the topic, but as someone who has done the reading, watched the advancements, and experimented with the tech, I remain more skeptical than ever. I will believe it when I see it and not one second before.

      • Semperverus@lemmy.world · +2 · 20 hours ago

        My argument is incredibly simple:

        YOU exist. In this universe. Your brain exists. The mechanisms for sentience exist. They are extremely complicated, and complex. Magic and mystic Unknowables do not exist. Therefore, at some point in time, it is a physical possibility for a person (or team of people) to replicate these exact mechanisms.

        We currently do not understand enough about them to do this. YOU are so laser-focused on how a Large Language Model behaves that you cannot take a step back and look at the bigger picture. Stop thinking about LLMs specifically. Neural-network artificial intelligence comes in many forms; many are domain-specific, such as molecular analysis for scientific research. The AI of tomorrow will likely behave very differently from the AI of today, and may require hardware breakthroughs to accomplish (I don’t know that the x86_64 or ARM instruction sets are sufficient or efficient enough for this process). But regardless of how it happens, you need to understand that because YOU exist, you are the proof that it is not impossible, or even infeasible, to accomplish.

        • audaxdreik@pawb.social · +2 / −1 · edited · 18 hours ago

          YOU exist. In this universe. Your brain exists. The mechanisms for sentience exist. They are extremely complicated, and complex. Magic and mystic Unknowables do not exist.

          I’ll grant you that the possibility exists. But like the idea that all your atoms could perfectly align such that you could run through a solid brick wall, the improbability makes it a moot point.

          Therefore, at some point in time, it is a physical possibility for a person (or team of people) to replicate these exact mechanisms.

          This is the part I take umbrage with. I agree that LLMs take up too much oxygen in the room, so let’s set them aside and talk about neural networks. They are a connectionist approach, premised on the belief that adding enough connections will eventually form a proper model, waking sentience and AI from the machine.

          Hinton and Sutskever continued [after their seminal 2012 article on deep learning] to staunchly champion deep learning. Its flaws, they argued, are not inherent to the approach itself. Rather they are the artifacts of imperfect neural-network design as well as limited training data and compute. Some day with enough of both, fed into even better neural networks, deep learning models should be able to completely shed the aforementioned problems. “The human brain has about 100 trillion parameters, or synapses,” Hinton told me in 2020.

          "What we now call a really big model, like GPT-3, has 175 billion. It’s a thousand times smaller than the brain.

          “Deep learning is going to be able to do everything,” he said.

          (Quoting Karen Hao’s Empire of AI from the Gary Marcus article)
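
          As a rough sanity check on the scale claim in that quote (using only the figures as cited there, ~100 trillion synapses vs. 175 billion parameters):

          ```python
          # Back-of-the-envelope check using only the figures cited above
          brain_synapses = 100e12      # "about 100 trillion parameters, or synapses"
          gpt3_parameters = 175e9      # "a really big model, like GPT-3, has 175 billion"
          print(f"ratio: {brain_synapses / gpt3_parameters:.0f}x")  # ~571x, i.e. on the order of "a thousand times smaller"
          ```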

          I keep citing Gary Marcus because he is “an American psychologist, cognitive scientist, and author, known for his research on the intersection of cognitive psychology, neuroscience, and artificial intelligence (AI)” [wiki].

          The reason all this is so important is that it refutes the idea that you can simply scale, or brute-force, your way to a robust, generalized model.

          If we could have only three knowledge frameworks, we would lean heavily on topics that were central to Kant’s Critique of Pure Reason, which argued, on philosophical grounds, that time, space, and causality are fundamental.

          Putting these on computationally solid ground is vital for moving forward.


          So ultimately, talking about any of this is putting the cart before the horse. Before we even discuss whether any possible approach could achieve sentience, I think we first need to actually understand what sentience is in ourselves and how it formed. There are currently just too many variables to solve the equation. I am outright refuting the idea that an imperfect understanding, using imperfect tools and imperfect methods, with any amount of computing power, no matter how massive, could chance upon sentience. Unless you’re ready to go the infinite-monkeys route.

          We may get things that look like it, or emulate it to some degree, but even then we are incapable of judging sentience:

          (From Joseph Weizenbaum’s “Computer Power and Human Reason: From Judgment to Calculation”, 1976)

          This phenomenon is comparable to the conviction many people have that fortune-tellers really do have some deep insight, that they do “know things,” and so on. This belief is not a conclusion reached after a careful weighing of evidence. It is, rather, a hypothesis which, in the minds of those who hold it, is confirmed by the fortune-teller’s pronouncements. As such, it serves the function of the drunkard’s lamppost we discussed earlier: no light is permitted to be shed on any evidence that might disconfirm it and, indeed, anything that might be seen as such evidence by a disinterested observer is interpreted in a way that elaborates and fortifies the hypothesis.

          It is then easy to understand why people conversing with ELIZA believe, and cling to the belief, that they are being understood. The “sense” and the continuity the person conversing with ELIZA perceives are supplied largely by the person himself. He assigns meanings and interpretations to what ELIZA “says” that confirm his initial hypothesis that the system does understand, just as he might do with what a fortune-teller says to him.

          We’ve been doing this since the first chatbot, ELIZA, in 1966. EDIT: we are also still trying to determine sentience in other animals. Like, we have a very tough time with this.
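
          For reference, the mechanism Weizenbaum is describing really is about this shallow. Here’s a toy sketch (my own illustration, not his actual DOCTOR script) of ELIZA-style keyword matching and pronoun reflection:

          ```python
          import re

          # Toy ELIZA-style responder: keyword patterns plus pronoun "reflection".
          # Any sense of being "understood" is supplied by the user, not the program.
          REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

          RULES = [
              (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
              (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
              (re.compile(r".*"), "Please tell me more."),
          ]

          def reflect(fragment):
              # Swap first-person words for second-person ones ("my" -> "your", etc.)
              return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

          def respond(utterance):
              # Return the template for the first matching pattern, echoing the user's own words back
              for pattern, template in RULES:
                  match = pattern.match(utterance)
                  if match:
                      return template.format(*(reflect(g) for g in match.groups()))

          print(respond("I feel like my work is pointless"))
          # -> Why do you feel like your work is pointless?
          ```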

          It’s modern-day alchemy. It’s such an easy thing to imagine; why couldn’t it be done? Surely there’s some scientific formula or breakthrough just out of reach that could eventually crack the code. I dunno, I find myself thinking about Fermi’s paradox and the Great Filter more …