• sinceasdf@lemmy.world · 28 days ago

      This piqued my curiosity so I dug into it a bit on Wikipedia. Most worms are dumb as fuck; roundworms are about as dumb as they come, with a total neuron count (around 300) comparable to a microscopic tardigrade (around 200). Most of those neurons are located in the head of the worm in a brain-like structure though, so I’m betting the clones develop their brains independently with no information transfer. Based on how simple worms are, I doubt there’s much learning or memory formation going on at all, so the clones are probably functionally identical. I would be surprised if most worm species ever exhibit any kind of learned behavior.

      https://en.m.wikipedia.org/wiki/List_of_animals_by_number_of_neurons

      • Dragon Rider (drag)@lemmy.nz · 28 days ago

        Techbros will still claim that generative AI possesses less intelligence than the worms as an excuse to keep enslaving them.

        • Bezier@suppo.fi · 28 days ago

          The AI that tech bros sell is not alive and does not have “intelligence.”

          • Dragon Rider (drag)@lemmy.nz · 28 days ago

            Does it have more than a worm with only 300 neurons in its brain, or are you one of those crazy religious people who thinks meat is the only thing in the universe that can think because it’s magic or something?

            • Bezier@suppo.fi · 28 days ago

              Neither. Why are those the only two options? My answer is that I have spent a little bit of time looking into how these things actually work. It’s surface level only, but it should be enough. Are you one of those crazy people who thinks ChatGPT is sentient?

              I’m not saying that a “real” AI cannot be built ever, but I for sure am saying that these image generators and chatbots are not it. AI tools are just functions that have no thought. If they start building products with some kind of continuous brain simulations, I’ll seriously rethink my stance.

              • Dragon Rider (drag)@lemmy.nz · 28 days ago

                Those are the only two options because you chose to argue with drag’s point about generative AI being smarter than a worm. You took this bait willingly. You devoted yourself to trying to prove a worm is smarter than ChatGPT. Nobody asked you to do it, you just decided this was what you were going to do today. It’s weird, why would you do that?

                • Bezier@suppo.fi · 28 days ago

                  I have no clue what you’re trying to prove, but I think I’m done with this conversation.

                • homicidalrobot@lemm.ee · 27 days ago (edited)

                  Nobody devoted themselves to shit, you’ve interjected with actual insane person comments about the topic. The AI isn’t alive and doesn’t even resemble life. You do not understand generative AI on a basic level. You do not understand how responses are generated or what’s going on with a prompt and response.

                  You need help. Not from me, not from social media. Try a social worker. Magical thinking like this points to some pretty unfortunate problems for you on a personal level, and it would behoove you and relieve everyone you know to get it figured out.

                  e: your comment history is FULL of “the AI is alive we should be nice” type posts. PLEASE seek professional help.

            • ilost7489@lemmy.ca · 27 days ago (edited)

              AI (as in current LLMs and the like) does not think. It predicts what word sounds right based on what we humans have written. It cannot make up thoughts or original concepts, synthesize information, etc. Being able to string sentences together based on probability is not necessarily intelligence or consciousness.

                • ilost7489@lemmy.ca · 24 days ago (edited)

                  It’s arguable whether the worm has intelligence of any kind; after all, it wouldn’t even need it. Neither the worm nor the AI has any intelligence to compare, because they don’t really think at all.

                  AI isn’t called AI because it can think. AI is just a tech buzzword for predictive algorithms.

                • skulblaka@sh.itjust.works · 26 days ago

                  Well, yes, it is. It doesn’t meet the minimum definition for sentience, let alone intelligence. You may as well be upset with how poorly we treat rocks.

                  Actually now that I think about it, you are upset with how we treat rocks. Computer chips are just silicon shot full of lightning and an AI is a function of its chips. We could eventually reach a point where we’ve created a true thinking AI on this substrate but we are so hilariously far away from even the beginnings of that, right now, that using it as a talking point is silly.

            • VeganCheesecake@lemmy.blahaj.zone · 28 days ago

              Neither the worm, nor current LLMs, are sapient.

              Also, I don’t really like most corporate LLM projects, but not because they enslave the LLMs. An LLM’s ‘thought process’ doesn’t really happen while it isn’t being used, and only encompasses a relatively small context window. How could something that isn’t capable of existing outside its ‘enslavement’ be freed?

              • Dragon Rider (drag)@lemmy.nz · 28 days ago

                The sweet release of death.

                Or, you know, we could devote serious resources to studying the nature of consciousness instead of just pretending like we already have all the answers, and we could use this knowledge to figure out how to treat AI ethically.

                Utilitarians believe ethics means increasing happiness. What if we could build AI farms with trillions of simulants doing heroin all the time with no ill effects?

                • VeganCheesecake@lemmy.blahaj.zone · 27 days ago (edited)

                  End commercial usage of LLMs? Honestly, I’m fine with that, why not. Don’t have to agree on the reason.

                  I am not saying that better understanding the nature of consciousness wouldn’t be great, but there’s so much research that deserves more funding, and that isn’t really an LLM problem, but a systemic one. And I just haven’t seen any convincing evidence that current models are conscious, and I don’t see how they could be, considering how they work.

                  I feel like the last part is something the AI from the paperclip thought experiment would do.

        • Asidonhopo@lemmy.world · 28 days ago

          I can see why techbros would want such gorgeous invertebrates as pets, and as long as they have enough enrichment in their enclosure, I would hardly call keeping these primitive worms slavery. Any kind of exotic pet raises questions of ethics, so I understand why you’d be concerned. Do you personally know some people in the tech industry who keep these? How big a terrarium do they need, and what kinds of plants and substrate do they prefer?

  • BudgetBandit@sh.itjust.works · 28 days ago

    Salt, sun, dryness, and especially FIRE are needed to kill them; they’re invasive, endanger local flora and fauna, and don’t have enough natural predators.