• FaceDeer@fedia.io · 3 months ago

    Your comment is simply counterfactual. I do indeed find LLMs to be useful. Saying “no you don’t!” is frankly ridiculous.

    I’m a computer programmer. I don’t work on LLMs themselves, but I understand the technology around them and have written programs that make use of them. I know what their capabilities and limitations are.
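
    For what it’s worth, “making use of them” in a program usually just means calling a hosted model’s API. A minimal sketch, assuming the OpenAI Python SDK; the model name and prompt are illustrative assumptions, not anything from this thread:

    ```python
    # Minimal sketch of a program that uses an LLM, assuming the OpenAI
    # Python SDK. The model name and prompt are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": "Summarize this page in two sentences."}],
    )
    print(response.choices[0].message.content)
    ```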

    • conciselyverbose@sh.itjust.works · 3 months ago

      Your claim that it’s capable of doing what it’s claimed to do isn’t just false.

      It’s an egregious, massively harmful lie, and repeating it is always extremely malicious and inexcusable behavior.

      • FaceDeer@fedia.io · 3 months ago

        I have genuinely found LLMs to be useful in many contexts. I use them to brainstorm and flesh out ideas for tabletop roleplaying adventures, to write song lyrics, to write Python scripts to do various random tasks. I’ve talked with them to learn about stuff, and verified that they were correct by checking their references. LLMs are demonstrably capable of these things. I demonstrated it.

        Go ahead and refrain from using them yourself if you really don’t want to, for whatever reason. But exclaiming “no it doesn’t!” in the face of them actually doing the things you say they don’t is just silly.

        • conciselyverbose@sh.itjust.works · 3 months ago

          They absolutely cannot reliably summarize the results of searches, which is what this post is about, and which the OP itself proves conclusively.

          Any meaningful rate of failures at all makes them massively, catastrophically damaging to humanity as a whole. “Just don’t use them” absolutely does not prevent their harm. Pushing them as competent is extremely fucking unacceptable behavior.

          And this is all completely ignoring the obscene energy costs associated with making web searches complete and utter dogshit.

          • FaceDeer@fedia.io · 3 months ago

            > They absolutely cannot reliably summarize the results of searches, which is what this post is about

            The problem is that it did summarize the results of this search; those results included one of those “if the Earth were the size of a grain of sand, Alpha Centauri would be X kilometers away” analogies. It did exactly the thing you’re saying it can’t do.
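
            For the record, the arithmetic behind that kind of analogy is plain scaling. A back-of-the-envelope sketch in Python, with an assumed 0.5 mm grain of sand, since the search’s actual figure isn’t reproduced here:

            ```python
            # Scale Earth down to a grain of sand and see where Alpha Centauri
            # lands. The 0.5 mm grain size is an assumption; the output is
            # illustrative, not the figure from the original search.
            EARTH_DIAMETER_KM = 12_742      # actual diameter of Earth
            SAND_GRAIN_KM = 0.5e-6          # 0.5 mm in kilometers (assumed)
            ALPHA_CENTAURI_KM = 4.13e13     # ~4.37 light-years in km

            scale = SAND_GRAIN_KM / EARTH_DIAMETER_KM
            print(f"Scaled distance: {ALPHA_CENTAURI_KM * scale:,.0f} km")
            # ≈ 1,600 km with these assumed sizes
            ```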

            > Any meaningful rate of failures at all makes them massively, catastrophically damaging to humanity as a whole.

            Nothing is perfect. Does that make everything a massive catastrophic threat to humanity? How have we managed to survive for this long?

            You’re ridiculously overblowing this. It’s a “ha ha, looks like AI made a whoopsie because I didn’t understand what I actually asked it to do” situation. It’s not Skynet coming to convince us to eat cyanide.

            > And this is all completely ignoring the obscene energy costs associated with making web searches complete and utter dogshit.

            Of course it’s ignoring that. It’s not real.

            You realize that energy costs money? If each web search cost an “obscene” amount, how is Microsoft managing to pay for it all? Why are they paying for it? Do you think they’ll continue paying for it indefinitely? It’d be a completely self-solving problem.
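
            A rough sanity check, with assumed ballpark figures (neither number is a measurement): ~3 Wh per LLM query is an often-cited estimate, and $0.10/kWh is a generic electricity price:

            ```python
            # Back-of-the-envelope energy cost of one LLM-assisted search.
            # Both inputs are assumed ballpark figures, not measurements.
            WH_PER_QUERY = 3.0     # often-cited rough estimate per query
            USD_PER_KWH = 0.10     # generic electricity price

            cost_usd = (WH_PER_QUERY / 1_000) * USD_PER_KWH
            print(f"Energy cost per query: ${cost_usd:.4f}")  # ≈ $0.0003
            ```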

            • conciselyverbose@sh.itjust.works · 3 months ago

              Summaries distinguish substance from nonsense. It cannot be described as a summary of a piece of content if it does not accurately portray the substance of that content.

              LLMs aren’t merely imperfect. They’re dumpster-fire misinformation machines with no redeeming qualities. Of course it’s not Skynet. Skynet was intelligent. This isn’t within 100 orders of magnitude of intelligence.

              Companies burn obscene amounts of money on moonshots all the time, even ones that have no possibility of success. Willingness to lose billions burning energy to degrade every single search is not evidence that it isn’t a nightmare for the environment (and again, for literally no purpose, because every single search with an LLM is worse than one without it).

              • FaceDeer@fedia.io · 3 months ago

                No, a summary is just a condensed version of some larger work. If the larger work contains bullshit then so can the summary; that doesn’t stop it from being a summary. As you say, a summary accurately portrays the substance of its content. In this case the content said Alpha Centauri was 13 km from Earth, so the summary said that too.

                This is really not complicated.

                > Companies burn obscene amounts of money on moonshots all the time, even ones that have no possibility of success.

                If you think it has no possibility of success, sit back and relax as AI goes away.

                • ipkpjersi@lemmy.ml · 3 months ago

                  > If you think it has no possibility of success, sit back and relax as AI goes away.

                  Yep. This is exactly it, and this is what people don’t seem to understand. AI is not going away, because it is actually useful: it has real uses, and real people are actively using it. It’s not entirely fluff-based, pointless technology like blockchain; real-world people actually use AI/LLMs.