• Mniot@programming.dev · 22 minutes ago

    I don’t think the article summarizes the research paper well. The researchers gave the AI models simple-but-large (which they confusingly called “complex”) puzzles, like the Tower of Hanoi but with 25 discs.

    The solution to these puzzles is nothing but patterns. You can write code that solves the Tower puzzle for any size n, and the whole program fits in less than a screen.
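
    For instance, a minimal Python sketch of the kind of program meant here (peg names and the disc counts shown are arbitrary):

        def hanoi(n, src="A", dst="C", via="B"):
            # Move n discs from src to dst, using via as the spare peg.
            if n == 0:
                return
            hanoi(n - 1, src, via, dst)  # park the top n-1 discs on the spare
            print(f"move disc {n}: {src} -> {dst}")
            hanoi(n - 1, via, dst, src)  # stack them back on top

        hanoi(3)  # 7 moves; hanoi(25) follows the same pattern for 2**25 - 1 moves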

    The problem the researchers see is that on these long, pattern-based solutions, the models follow a bad path and then just give up long before they hit their limit on tokens. The researchers don’t have an answer for why this is, but they suspect that the reasoning doesn’t scale.

  • MangoCats@feddit.it · 12 minutes ago

    It’s not just the memorization of patterns that matters, it’s the recall of appropriate patterns on demand. Call it what you will; even if AI is just a better librarian for search work, that’s value - that’s the new Google.

    • cactopuses@lemm.ee · 8 minutes ago

      While a fair idea, there are still two issues with it: hallucinations and the cost of running the models.

      Unfortunately, it takes significant compute resources to produce even simple responses, and those responses can be totally made up yet still look completely real. It’s gotten much better, sure, but blindly trusting these things (which many people do) can have serious consequences.

  • melsaskca@lemmy.ca · 55 minutes ago

    It’s all “one instruction at a time” regardless of high processor speeds and words like “intelligent” being bandied about. “Reason” discussions should fall into the same query bucket as “sentience”.

    • MangoCats@feddit.it · 11 minutes ago

      My impression of LLM training and deployment is that it’s actually massively parallel in nature - which can be implemented one instruction at a time - but isn’t in practice.
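
      The same computation can be expressed either way; a toy NumPy sketch (the sizes are made up):

          import numpy as np

          x = np.random.rand(512)       # made-up activation vector
          W = np.random.rand(512, 512)  # made-up weight matrix

          # "One instruction at a time": a scalar loop over every multiply-add.
          y_serial = np.zeros(512)
          for i in range(512):
              for j in range(512):
                  y_serial[i] += W[i, j] * x[j]

          # Massively parallel in practice: one matmul that the hardware
          # fans out across many cores at once.
          y_parallel = W @ x

          assert np.allclose(y_serial, y_parallel)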

  • minoscopede@lemmy.world · 6 hours ago

    I see a lot of misunderstandings in the comments 🫤

    This is a pretty important finding for researchers, and it’s not obvious by any means. This finding is not showing a problem with LLMs’ abilities in general. The issue they discovered is specifically for so-called “reasoning models” that iterate on their answer before replying. It might indicate that the training process is not sufficient for true reasoning.

    Most reasoning models are not incentivized to think correctly, and are only rewarded based on their final answer. This research might indicate that’s a flaw that needs to be corrected before models can actually reason.
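
    As a toy illustration of that incentive gap (the function names are mine, not any lab’s actual pipeline):

        def outcome_reward(final_answer: str, gold: str) -> float:
            # Only the final answer is scored; flawed reasoning that ends
            # in a lucky guess still earns full reward.
            return 1.0 if final_answer.strip() == gold.strip() else 0.0

        def process_reward(steps: list[str], verify_step) -> float:
            # Hypothetical alternative: score every intermediate step,
            # rewarding the reasoning itself rather than just the outcome.
            return sum(verify_step(s) for s in steps) / max(len(steps), 1)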

    • Knock_Knock_Lemmy_In@lemmy.world · 2 hours ago

      When given explicit instructions to follow, models failed because they had not seen similar instructions before.

      This paper shows that there is no reasoning in LLMs at all, just extended pattern matching.

      • MangoCats@feddit.it · 6 minutes ago

        I’m not trained or paid to reason, I am trained and paid to follow established corporate procedures. On rare occasions my input is sought to improve those procedures, but the vast majority of my time is spent executing tasks governed by a body of (not quite complete, sometimes conflicting) procedural instructions.

        If AI can execute those procedures as well as, or better than, human employees, I doubt employers will care if it is reasoning or not.

    • REDACTED@infosec.pub · 2 hours ago

      What confuses me is that we seemingly keep pushing back what counts as reasoning. Not too long ago, some smart algorithms, or a bunch of if/then instructions in software, officially counted, by definition, as software/computer reasoning. Logically, CPUs do it all the time. Suddenly, when AI is doing that with pattern recognition, memory and even more advanced algorithms, it’s no longer reasoning? I feel like at this point the more relevant question is “What exactly is reasoning?”. Before you answer, understand that most humans seemingly live by pattern recognition, not reasoning.

      https://en.wikipedia.org/wiki/Reasoning_system
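
      For context, the if/then kind of system that article describes fits in a few lines; a minimal forward-chaining sketch (the facts and rules are invented for illustration):

          facts = {"has_fur", "gives_milk"}
          rules = [
              ({"has_fur"}, "is_mammal"),
              ({"is_mammal", "gives_milk"}, "nurses_young"),
          ]

          # Keep firing rules until no new facts can be derived.
          changed = True
          while changed:
              changed = False
              for conditions, conclusion in rules:
                  if conditions <= facts and conclusion not in facts:
                      facts.add(conclusion)  # the "then" branch fires
                      changed = True

          print(facts)  # derives 'is_mammal', then 'nurses_young'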

      • MangoCats@feddit.it · 9 minutes ago

        I think as we approach the uncanny valley of machine intelligence, it’s no longer a cute cartoon but a menacing creepy not-quite imitation of ourselves.

    • Tobberone@lemm.ee · 3 hours ago

      What statistical method do you base that claim on? The results presented match expectations, given that Markov chains are still the basis of inference. What magic juice is added to “reasoning models” that allows them to break free of the inherent boundaries of the statistical methods they are based on?
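
      For reference, the simplest form of that machinery is a bigram Markov chain; a Python sketch (the corpus is a stand-in, and real LLMs condition on far longer contexts than one word):

          import random
          from collections import defaultdict

          corpus = "the cat sat on the mat the dog sat on the rug".split()

          # Count bigram transitions: which words follow which.
          transitions = defaultdict(list)
          for cur, nxt in zip(corpus, corpus[1:] + corpus[:1]):  # wrap around
              transitions[cur].append(nxt)

          # "Inference" is repeated sampling from the conditional distribution.
          word, output = "the", ["the"]
          for _ in range(8):
              word = random.choice(transitions[word])
              output.append(word)
          print(" ".join(output))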

    • theherk@lemmy.world · 5 hours ago

      Yeah these comments have the three hallmarks of Lemmy:

      • AI is just autocomplete mantras.
      • Apple is always synonymous with bad and dumb.
      • Rare pockets of really thoughtful comments.

      Thanks for being at least the last of those.

    • Zacryon@feddit.org · 5 hours ago

      Some AI researchers found it obvious as well, in the sense that they’d suspected it and had some indications. But it’s good to see more data affirming this assessment.

      • kreskin@lemmy.world · 3 hours ago

        Lots of us who did time in search and relevancy early on knew ML was always largely breathless, overhyped marketing. It was endless buzzwords and misframing from the start, but it raised our salaries. Anything the execs don’t understand is profitable and worth doing.

  • Xatolos@reddthat.com · 5 hours ago

    So, what you’re saying here is that the A in AI actually stands for artificial, and it’s not really intelligent and reasoning.

    Huh.

  • skisnow@lemmy.ca · 8 hours ago

    What’s hilarious/sad is the response to this article over on reddit’s “singularity” sub, in which all the top comments are people who’ve obviously never got all the way through a research paper in their lives all trashing Apple and claiming their researchers don’t understand AI or “reasoning”. It’s a weird cult.

  • RampantParanoia2365@lemmy.world · 6 hours ago

    Fucking obviously. Until Data’s positronic brain becomes reality, AI is not actual intelligence.

    AI is not A I. I should make that a t-shirt.

  • Communist@lemmy.frozeninferno.xyz · 8 hours ago

    I think it’s important to note (I’m not an LLM, I know that phrase triggers you to assume I am) that they haven’t proven this is an inherent architectural issue, which I think would be the next step for the assertion.

    Do we know that they don’t and are incapable of reasoning, or do we just know that for these problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don’t? That’s the big question that needs answering. It’s still possible that we just haven’t properly incentivized reason over memorization during training.

    If someone can objectively answer “no” to that, the bubble collapses.

    • Knock_Knock_Lemmy_In@lemmy.world · 2 hours ago

      do we know that they don’t and are incapable of reasoning?

      “even when we provide the algorithm in the prompt—so that the model only needs to execute the prescribed steps—performance does not improve”

  • mavu@discuss.tchncs.de · 16 hours ago

    No way!

    Statistical language models don’t reason?

    But OpenAI, robots taking over!

  • GaMEChld@lemmy.world · 14 hours ago

    Most humans don’t reason. They just parrot shit too. The design is very human.

    • skisnow@lemmy.ca · 8 hours ago

      I hate this analogy. As a throwaway whimsical quip it’d be fine, but it’s specious enough that I keep seeing it used earnestly by people who think that LLMs are in any way sentient or conscious, so it’s lowered my tolerance for it as a topic even if you did intend it flippantly.

    • joel_feila@lemmy.world · 9 hours ago

      That’s why CEOs love them. When your job is 90% spewing BS, a machine that does that is impressive.

    • El Barto@lemmy.world · 13 hours ago

      LLMs deal with tokens. Essentially, predicting a series of bytes.

      Humans do much, much, much, much, much, much, much more than that.

    • SpaceCowboy@lemmy.ca · 13 hours ago

      Yeah, I’ve always said the flaw in Turing’s Imitation Game concept is that if an AI were indistinguishable from a human, that wouldn’t prove it’s intelligent. Because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs, which eventually killed him, simply because he was gay.

      • jnod4@lemmy.ca · 10 hours ago

        I think that person had to choose between the drugs or hardcore prison in 1950s England, where being a bit odd was enough to guarantee an incredibly difficult time, as they say in England. I would’ve chosen the drugs as well, hoping they would fix me. Too bad that without testosterone you’re going to be suicidal and depressed; I’d rather keep my hair than be horny all the time.

      • crunchy@lemmy.dbzer0.com · 12 hours ago

        I’ve heard something along the lines of, “it’s not when computers can pass the Turing Test, it’s when they start failing it on purpose that’s the real problem.”

      • Zenith@lemm.ee · 11 hours ago

        Yeah, we’re so stupid we’ve figured out advanced maths and physics, and built incredible skyscrapers and the LHC. We may as individuals be more or less intelligent, but humans as a whole are incredibly intelligent.

  • Nanook@lemm.ee · 1 day ago

    lol, is this news? I mean, we call it AI, but it’s just LLMs and variants; it doesn’t think.

    • Clent@lemmy.dbzer0.com · 15 hours ago

      Proving it matters. Science is constantly proving things that people believe are obvious, because people have an uncanny ability to believe things that are false. Some people will keep believing things long after science has proven them false.

      • kadup@lemmy.world · 23 hours ago

        Apple is significantly behind and arrived late to the whole AI hype, so of course it’s in their absolute best interest to keep showing how LLMs aren’t special or amazingly revolutionary.

        They’re not wrong, but the motivation is also pretty clear.

        • Venator@lemmy.nz · 10 hours ago

          Apple always arrives late to any new tech, doesn’t mean they haven’t been working on it behind the scenes for just as long though…

        • Optional@lemmy.world · 20 hours ago

          “Late to the hype” is actually a good thing. Gen AI is a scam wrapped in idiocy wrapped in a joke. That Apple is slow to ape the idiocy of Microsoft is just fine.

        • MCasq_qsaCJ_234@lemmy.zip · 22 hours ago

          They need to convince investors that this delay wasn’t due to incompetence. That will only be somewhat effective as long as there isn’t an innovation that makes AI more effective.

          If that happens, Apple shareholders will, at best, ask the company to increase investment in that area or, at worst, to restructure the company, which could also mean a change in CEO.

        • dubyakay@lemmy.ca · 23 hours ago

          Maybe they are so far behind because they jumped on the same train but then failed at achieving what they wanted based on the claims. And then they started digging around.

          • Clent@lemmy.dbzer0.com · 15 hours ago

            Yes, Apple haters can’t admit or understand it, but Apple doesn’t do pseudo-tech.

            They may do silly things, and they may love their 100% markup, but it’s all real technology.

            The AI pushers of today are akin to the pushers of paranormal phenomena a century ago. These pushers want us to believe, need us to believe, so they can get us addicted and extract value from our very existence.

    • JohnEdwa@sopuli.xyz · 24 hours ago

      "It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’." -Pamela McCorduck´.
      It’s called the AI Effect.

      As Larry Tesler puts it, “AI is whatever hasn’t been done yet.”.

      • kadup@lemmy.world · 23 hours ago

        That entire paragraph is much better at supporting the precise opposite argument. Computers can beat Kasparov at chess, but they’re clearly not thinking when making a move - even if we use the most open biological definitions for thinking.

        • cyd@lemmy.world · 10 hours ago

          By that metric, you can argue Kasparov isn’t thinking during chess, either. A lot of human chess “thinking” is recalling memorized openings, evaluating positions many moves deep, and other tasks that map to what a chess engine does. Of course Kasparov is thinking, but then you have to conclude that the AI is thinking too. Thinking isn’t a magic process, nor is it tightly coupled to human-like brain processes as we like to think.
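
          The many-moves-deep part of that mapping is mechanical enough to sketch; a toy depth-limited negamax (evaluate, legal_moves and apply_move are placeholders a real engine would supply):

              def negamax(position, depth, evaluate, legal_moves, apply_move):
                  # Look `depth` plies ahead, assuming both sides pick the
                  # best reply found: "evaluating positions many moves deep".
                  moves = legal_moves(position)
                  if depth == 0 or not moves:
                      return evaluate(position)
                  best = float("-inf")
                  for move in moves:
                      score = -negamax(apply_move(position, move), depth - 1,
                                       evaluate, legal_moves, apply_move)
                      best = max(best, score)
                  return best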

          • kadup@lemmy.world · 1 hour ago

            By that metric, you can argue Kasparov isn’t thinking during chess

            Kasparov’s thinking fits pretty much all biological definitions of thinking. Which is the entire point.

        • Grimy@lemmy.world · 22 hours ago

          No, it shows how certain people misunderstand the meaning of the word.

          You’ve called NPCs in video games “AI” for a decade, yet you were never implying they were somehow intelligent. The whole argument is strangely inconsistent.

          • Clent@lemmy.dbzer0.com · 15 hours ago

            Intelligence has a very clear definition.

            It requires the ability to acquire knowledge, understand knowledge and use knowledge.

            No one has been able to create a system that can understand knowledge, therefore none of it is artificial intelligence. Each generation is merely a more and more complex knowledge model. Useful in many ways, but never intelligent.

            • Grimy@lemmy.world · 2 hours ago

              Dog has a very clear definition, so when you call a sausage in a bun a “Hot Dog”, you are actually a fool.

              Smart has a very clear definition, so no, you do not have a “Smart Phone” in your pocket.

              Also, that is not the definition of intelligence. But the crux of the issue is that you are making up a definition for AI that suits your needs.

            • 8uurg@lemmy.world · 5 hours ago

              Wouldn’t the algorithm that creates these models in the first place fit the bill? Given that it takes a bunch of text data, and manages to organize this in such a fashion that the resulting model can combine knowledge from pieces of text, I would argue so.

              What is understanding knowledge anyway? Wouldn’t humans fail to fit the bill too, given that for most of our knowledge we do not know why it is the way it is, or even held rules that were - in hindsight - incorrect?

              If a model is more capable of solving a problem than an average human being, isn’t it, in its own way, some form of intelligent? And, to take things to the utter extreme, wouldn’t evolution itself be intelligent, given that it causes intelligent behavior to emerge, for example, viruses adapting to external threats? What about an (iterative) optimization algorithm that finds solutions that no human would be able to find?

              Intelligence has a very clear definition.

              I would disagree; it is probably one of the hardest things to define out there, its meaning has changed greatly with time, and it is core to the study of philosophy. Every time a being or thing fits a definition of intelligent, the definition is often altered to exclude it, as has been done many times.

          • technocrit@lemmy.dbzer0.com · 22 hours ago

            Who is “you”?

            Just because some dummies supposedly think that NPCs are “AI”, that doesn’t make it so. I don’t consider checkers to be a litmus test for “intelligence”.

            • Grimy@lemmy.world · 21 hours ago

              “You” applies to anyone who doesn’t understand what AI means. It’s an umbrella term for a lot of things.

              NPCs ARE AI. AI doesn’t mean “human-level intelligence” and never did. Read the wiki if you need help understanding.

      • vala@lemmy.world · 19 hours ago

        Yesterday I asked an LLM “how much energy is stored in a grand piano?” It responded by saying there is no energy stored in a grand piano because it doesn’t have a battery.

        Any reasoning human would have understood that question to be referring to the tension in the strings.

        Another example is asking “does lime cause kidney stones?”. It didn’t assume I meant lime the mineral and went with lime the citrus fruit instead.

        Once again a reasoning human would assume the question is about the mineral.

        Ask these questions again in a slightly different way and you might get a correct answer, but it won’t be because the LLM was thinking.

        • xthexder@l.sw0.com · 16 hours ago

          I’m not sure how you arrived at lime the mineral being a more likely question than lime the fruit. I’d expect someone asking about kidney stones would also be asking about foods that are commonly consumed.

          This kind of just goes to show there are multiple ways something can be interpreted. Maybe a smart human would ask for clarification, but for sure today’s AIs will just happily spit out the first answer that comes up. LLMs are extremely “good” at making up answers to leading questions, even if they’re completely false.

          • Knock_Knock_Lemmy_In@lemmy.world · 1 hour ago

            A well trained model should consider both types of lime. Failure is likely down to temperature and other model settings. This is not a measure of intelligence.
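
            For illustration, temperature just rescales the model’s output distribution before sampling; a minimal sketch (the logits are invented for the example):

                import math, random

                def sample(logits, temperature=1.0):
                    # Low temperature sharpens the distribution (more deterministic);
                    # high temperature flattens it (more variety, more risk).
                    scaled = [l / temperature for l in logits]
                    m = max(scaled)
                    weights = [math.exp(s - m) for s in scaled]
                    total = sum(weights)
                    return random.choices(range(len(logits)),
                                          weights=[w / total for w in weights])[0]

                logits = [2.0, 1.0, 0.5]    # made-up scores for three candidate tokens
                print(sample(logits, 0.2))  # almost always token 0
                print(sample(logits, 2.0))  # noticeably more varied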

          • JohnEdwa@sopuli.xyz · 2 hours ago

            Making up answers is kinda their entire purpose. LLMs are fundamentally just text-generation algorithms: they are designed to produce text that looks like it could have been written by a human. Which they are amazing at, especially when you take into account how many paragraphs of instructions you can give them and have them rather successfully follow.

            The one thing they can’t do is verify whether what they are talking about is true, as it’s all just slapping words together using probabilities. If they could, they would stop being LLMs and start being AGIs.

        • postmateDumbass@lemmy.world · 17 hours ago

          Honestly, I thought about the chemical energy in the materials making up the piano, and what energy burning it would release.

          • xthexder@l.sw0.com · 16 hours ago

            The tension of the strings would actually store a pretty minuscule amount of energy too. There’s very little stretch to a piano wire, so while the force might be high, the potential energy (the work done to tension the wire, by hand with a wrench) is low.

            Compared to burning a piece of wood, which would release orders of magnitude more energy.
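
            A rough back-of-envelope supports that, with every number here an assumption (order-of-magnitude only):

                # Elastic energy per string, treating the wire as a linear spring:
                # E = 0.5 * F * x (average force times stretch).
                force = 700.0    # N, assumed typical string tension
                stretch = 0.005  # m, assumed elastic elongation when tuned
                strings = 230    # assumed string count for a grand piano

                elastic = 0.5 * force * stretch * strings
                print(f"string tension: ~{elastic:.0f} J")      # order of 10^2 J

                # Burning the wood instead: assume ~200 kg at ~16 MJ/kg.
                chemical = 200 * 16e6
                print(f"burning the piano: ~{chemical:.1e} J")  # order of 10^9 J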

        • antonim@lemmy.dbzer0.com · 18 hours ago

          But 90% of “reasoning humans” would answer just the same. Your questions are based on some non-trivial knowledge of physics, chemistry and medicine that most people do not possess.

      • technocrit@lemmy.dbzer0.com · 22 hours ago

        I’m going to write a program to play tic-tac-toe. If y’all don’t think it’s “AI”, then you’re just haters. Nothing will ever be good enough for y’all. You want scientific evidence of intelligence?!?! I can’t even define intelligence so take that! \s

        Seriously tho. This person is arguing that a checkers program is “AI”. It kinda demonstrates the loooong history of this grift.

        • JohnEdwa@sopuli.xyz · 22 hours ago

          It is. And it always has been. “Artificial intelligence” doesn’t mean a feeling, thinking robot person (that would fall under AGI or artificial consciousness); it’s a vast field of research in computer science with many, many things under it.

          • Endmaker@ani.social · 21 hours ago

            ITT: people who obviously did not study computer science or AI at at least an undergraduate level.

            Y’all are too patient. I can’t be bothered to spend the time to give people free lessons.

            • antonim@lemmy.dbzer0.com · 17 hours ago

              Wow, I would deeply apologise on the behalf of all of us uneducated proles having opinions on stuff that we’re bombarded with daily through the media.

            • Clent@lemmy.dbzer0.com · 14 hours ago

              The computer science industry isn’t the authority on artificial intelligence it thinks it is. The industry is driven by a level of hubris that causes people to step beyond the bounds of science and into the realm of humanities without acknowledgment.

        • LandedGentry@lemmy.zip · 22 hours ago

          Yeah that’s exactly what I took from the above comment as well.

          I have a pretty simple bar: until we’re debating the ethics of turning it off or otherwise giving it rights, it isn’t intelligent. No, it’s not scientific, but it’s a hell of a lot more consistent than what all the AI evangelists espouse. And frankly, if we’re talking about the ethics of how to treat something we consider intelligent, we have to go beyond pure scientific benchmarks anyway. It becomes a philosophy/ethics discussion.

          Like crypto, it has become a pseudo-religion. Challenges to dogma and orthodoxy are shouted down, and non-believers are not welcome to critique it.

    • Melvin_Ferd@lemmy.world · 1 day ago

      This is why I say these articles are so similar to how right-wing media covers issues about immigrants.

      There’s some weird media push to convince the left to hate AI. Think of all the headlines on these issues; there are so many similarities. They’re taking jobs. They’re a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this one, where they take something known and twist it to sound nefarious, to keep the story alive and avoid decay of interest.

      Then, when they pass laws, we’re all primed to accept them removing whatever advantages them and disadvantages us.

      • technocrit@lemmy.dbzer0.com · 22 hours ago

        This is why I say these articles are so similar to how right wing media covers issues about immigrants.

        Maybe the actual problem is people who equate computer programs with people.

        Then, when they pass laws, we’re all primed to accept them removing whatever advantages them and disadvantages us.

        You mean laws like this? jfc.

        https://www.inc.com/sam-blum/trumps-budget-would-ban-states-from-regulating-ai-for-10-years-why-that-could-be-a-problem-for-everyday-americans/91198975

        • Melvin_Ferd@lemmy.world · 20 hours ago

          Literally what I’m talking about. They have been pushing anti-AI propaganda to alienate the left from embracing it while the right embraces it. You have such a blind spot about this that you can’t even see you’re making my argument for me.

          • antonim@lemmy.dbzer0.com · 17 hours ago

            That depends on your assumption that the left would have anything relevant to gain by embracing AI (whatever that’s actually supposed to mean).

            • Melvin_Ferd@lemmy.world · 17 hours ago

              What isn’t there to gain?

              Its power lies in ingesting language and producing infinite variations. We can feed it talking points, ask it to refine our ideas, test their logic, and even request counterarguments to pressure-test our stance. It helps us build stronger, more resilient narratives.

              We can use it to make memes. Generate images. Expose logical fallacies. Link to credible research. It can detect misinformation in real-time and act as a force multiplier for anyone trying to raise awareness or push back on disinfo.

              Most importantly, it gives a voice to people with strong ideas who might not have the skills or confidence to share them. Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

              Sure, it has flaws. But rejecting it outright while the right embraces it? That’s beyond shortsighted; it’s self-sabotage. And unfortunately, after the last decade, that kind of misstep is par for the course.

              • antonim@lemmy.dbzer0.com · 14 hours ago

                I have no idea what sort of AI you’ve used that could do any of the stuff you’ve listed. A program that doesn’t reason won’t expose logical fallacies with any rigour or refine anyone’s ideas. It will link to credible research that you could already find on Google, but it will also add some hallucinations to the summary. And so on; it’s completely divorced from how the stuff currently works.

                Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

                That’s a misguided view of how art is created. Supposed “brilliant ideas” are dime a dozen, it takes brilliant writers and artists to make them real. Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept. If you are not competent in a visual medium, then don’t make it visual, write a story or an essay.

                Besides, most of the popular and widely shared webcomics out there are visually extremely simple or just bad (look at SMBC or xkcd or - for a right-wing example - Stonetoss).

                For now I see no particular benefits that the right-wing has obtained by using AI either. They either make it feed back into their delusions, or they whine about the evil leftists censoring the models (by e.g. blocking its usage of slurs).

                • Melvin_Ferd@lemmy.world · 14 hours ago

                  Here is ChatGPT doing what you said it can’t do: finding the logical fallacies in what you write:

                  You’re raising strong criticisms, and it’s worth unpacking them carefully. Let’s go through your argument and see if there are any logical fallacies or flawed reasoning.


                  1. Straw Man Fallacy

                  “Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept.”

                  This misrepresents the original claim:

                  “AI can help create a framework at the very least so they can get their ideas down.”

                  The original point wasn’t that AI could replace the entire creative process or make a comic successful on its own—it was that it can assist people in starting or visualizing something they couldn’t otherwise. Dismissing that by shifting the goalposts to “producing a full, good comic” creates a straw man of the original claim.


                  2. False Dichotomy

                  “If you are not competent in a visual medium, then don’t make it visual, write a story or an essay.”

                  This suggests a binary: either you’re competent at visual art or you shouldn’t try to make anything visual. That’s a false dichotomy. People can learn, iterate, or collaborate, and tools like AI can help bridge gaps in skill—not replace skill, but allow exploration. Many creators use tools before mastery (e.g., musicians using GarageBand, or writers using Grammarly).


                  3. Hasty Generalization

                  “Supposed ‘brilliant ideas’ are a dime a dozen…”

                  While it’s true that execution matters more than ideas alone, dismissing the value of ideas altogether is an overgeneralization. Many successful works do start with a strong concept—and while many fail in execution, tools that lower the barrier to prototyping or drafting can help surface more workable ideas. The presence of many bad ideas doesn’t invalidate the potential value of enabling more people to test theirs.


                  4. Appeal to Ridicule / Ad Hominem (Light)

                  “…result in a boring comic…” / “…just bad (look at SMBC or xkcd or…)”

                  Criticizing popular webcomics like SMBC or xkcd by calling them “bad” doesn’t really support your broader claim. These comics are widely read because of strong writing and insight, despite minimalistic visuals. It comes off as dismissive and ridicules the counterexamples rather than engaging with them. That’s not a logical fallacy in the strictest sense, but it’s rhetorically weak.


                  5. Tu Quoque / Whataboutism (Borderline)

                  “For now I see no particular benefits that the right-wing has obtained by using AI either…”

                  This seems like a rebuttal to a point that wasn’t made directly. The original argument wasn’t that “the right is winning with AI,” but rather that alienating the left from it could lead to missed opportunities. Refuting a weaker version (e.g., “the right is clearly winning with AI”) isn’t addressing the original concern, which was more about strategic adoption.


                  Summary of Fallacies Identified:

                  Straw Man: Misrepresents the role of AI in creative assistance.
                  False Dichotomy: Assumes one must either be visually skilled or not attempt visual media.
                  Hasty Generalization: Devalues “brilliant ideas” universally.
                  Appeal to Ridicule: Dismisses counterexamples via mocking tone rather than analysis.
                  Tu Quoque-like: Compares left vs. right AI use without addressing the core point about opportunity.


                  Your criticism is thoughtful and not without merit—but it’s wrapped in rhetoric that sometimes slips into oversimplification or misrepresentation of the opposing view. If your goal is to strengthen your argument or have a productive back-and-forth, refining those areas could help. Would you like to rewrite it in a way that keeps the spirit of your critique but sharpens its logic?

                  At this point you’re just arguing for argument’s sake. You’re not wrong or right; you’re muddying things. Saying it’ll be boring comics misses the entire point. Saying it’s the same as Google is pure ignorance of what it can do. But this goes to my point about how this stuff is all similar to the anti-immigrant mentality. The people who buy into it end up making these kinds of ignorant and shortsighted statements just to prove things that just aren’t true. But they’ve bought into the hype and need to justify it.

      • hansolo@lemmy.today · 24 hours ago

        Because it’s a fear-mongering angle that still sells. AI has been a vehicle for sci-fi for so long that convincing Boomers it won’t kill us all is the hard part.

        I’m a moderate user of it for code and a skeptic of LLM abilities, but five years from now, when we’re leveraging ML models for groundbreaking science and haven’t been nuked by SkyNet, all of this will look quaint and silly.