• nevemsenki@lemmy.world · ↑112 / ↓13 · 1 year ago

    Can’t wait for this AI bubble to fizzle. It’s the blockchain insanity all over again.

    • danielbln@lemmy.world · ↑64 / ↓10 · 1 year ago

      Maybe, but gen AI produces actually useful, tangible results. That’s already heaps more than crypto, which is just techno-gambling.

      • nevemsenki@lemmy.world · ↑38 / ↓4 · 1 year ago

        As long as LLM AI models are prone to hallucinating and there is no way to audit how they derive results (e.g. to verify accuracy), relying on them will run into roadblocks and limitations. Once they solve this issue, though, that will be a whole different story, I agree. As for other AIs such as image or video generation, I don’t have enough experience to tell…

        • danielbln@lemmy.world · ↑23 / ↓4 · 1 year ago

          Hallucinations can be heavily reduced today by grounding the LLM in verified source data. People use naked LLMs as knowledge databases, which is indeed prone to hallucination. Provide them with verified data on the side, however, and they are very, very good at sticking to the truth. I know, because we deploy these with clients to great effect.
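
          Roughly what that grounding looks like in practice (a minimal sketch, assuming the OpenAI Python client with an OPENAI_API_KEY set, and a toy lookup table standing in for a real retriever or vector store):

          ```python
          from openai import OpenAI

          # Toy "verified data" source; in practice this would be a search index,
          # vector store, or database owned by the client.
          VERIFIED_FACTS = {
              "refund policy": "Refunds are issued within 14 days of purchase.",
              "support hours": "Support is available Mon-Fri, 09:00-17:00 CET.",
          }

          def grounded_answer(question: str) -> str:
              # Pull in only the snippets that look relevant to the question.
              context = "\n".join(v for k, v in VERIFIED_FACTS.items() if k in question.lower())
              client = OpenAI()
              resp = client.chat.completions.create(
                  model="gpt-4o",  # model name is an assumption; use whatever you deploy
                  messages=[
                      {"role": "system",
                       "content": "Answer ONLY from the verified context below. "
                                  "If the context does not cover the question, say you don't know.\n\n"
                                  f"Context:\n{context}"},
                      {"role": "user", "content": question},
                  ],
              )
              return resp.choices[0].message.content

          print(grounded_answer("What is your refund policy?"))
          ```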

          Image, music, video models are making great strides and are already part of various pipelines, all the way up to the big boy tools like Photoshop (generative fill, for example).

          The tech is being incorporated at a large scale by a lot of companies, from SME to megacorp. I don’t see it going away any time soon, even if it doesn’t improve from here on out (which it undoubtedly will).

          • BastingChemina@slrpnk.net · ↑3 / ↓1 · 1 year ago

            The issue is that, from time to time, they still confidently hallucinate, and there is no way to detect whether they are right or not.

            • krakenx@lemmy.world · ↑2 / ↓1 · 1 year ago

              Hire 1 person to verify AI output instead of a dozen to make the content. If that one editor misses something, who cares when we live in a post-truth society where the media lies on purpose.

            • QuaternionsRock@lemmy.world · ↑9 / ↓3 · edited · 1 year ago

              GPT-4:

              In Africa, there are three countries that start with the letter “K”:

              1. Kenya
              2. Kingdom of Eswatini (although it’s often referred to simply as Eswatini)
              3. Kiribati

              However, it’s worth noting that Kiribati is not in Africa; it’s a Pacific island nation. So, only Kenya and the Kingdom of Eswatini in Africa start with the letter “K”, but most people just refer to Eswatini without the “Kingdom” prefix. If you meant countries solely with the prominent “K” at the start, then it’s just Kenya.

              Anecdotal evidence is useless because it can be contradicted with anecdotal evidence.

        • rambaroo@lemmy.world · ↑8 · 1 year ago

          Hallucinations aren’t the only issue with LLMs; they also have a limited amount of context they can recall, and that problem won’t go away.
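
          To make the limit concrete, here’s a minimal sketch of checking a prompt against a fixed window, assuming OpenAI’s tiktoken tokenizer (the window size below is just illustrative):

          ```python
          import tiktoken

          CONTEXT_LIMIT = 8192                       # e.g. the original GPT-4 window
          enc = tiktoken.get_encoding("cl100k_base")

          def truncate_to_fit(prompt: str, reserved_for_reply: int = 1024) -> str:
              budget = CONTEXT_LIMIT - reserved_for_reply
              tokens = enc.encode(prompt)
              if len(tokens) <= budget:
                  return prompt
              # Anything past the budget simply never reaches the model.
              return enc.decode(tokens[:budget])
          ```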

          • SCB@lemmy.world · ↑2 / ↓1 · 1 year ago

            that problem won’t go away

            That problem is very much being worked on

        • Cheers@sh.itjust.works · ↑2 / ↓2 · 1 year ago

          This mentality is the same as saying Wikipedia is not a source. I agree better regulations need to be implemented, but the speed at which these things are being churned out and tuned is mind-blowing. It’s like if Wikipedia started branching off into subcategories that each have their own specialty, which can be more easily moderated, and then folded back into the more general system.

    • JDubbleu@programming.dev · ↑32 / ↓8 · 1 year ago

      It’s not quite blockchain. It is incredibly useful in a broad range of applications, and has genuinely changed how millions of people work. Sure, it’s not the magic bullet Wall Street thinks it is, but my work has been improved immensely through the use of generative AI, especially with uniquely challenging software problems and niche questions.

      I think it’ll be similar to VR. Extremely useful and interesting, but over-hyped and not going to penetrate our lives as much as most people think.

      • danielbln@lemmy.world · ↑11 / ↓2 · 1 year ago

        My mom never used VR, but she happily talks to GPT-4. From that perspective I think mindshare in the broader population will be significantly higher than VR (even if it doesn’t live up to the VC/Wall Street hype machine).

            • R00bot@lemmy.blahaj.zone · ↑2 · 1 year ago

              I’m just saying the accessibility of AI doesn’t necessarily mean it has more utility. Just that it’s more accessible.

          • danielbln@lemmy.world · ↑1 · 1 year ago

            Even if I gifted one to her, she wouldn’t use it. A VR headset is peak nerd shit, as much as I love it. Having a dialogue with an AI is much more approachable for the layman.

            • R00bot@lemmy.blahaj.zone · ↑1 · 1 year ago

              That’s fair, I guess. I don’t have a VR headset or talk to AI, so IMO they’re both pretty nerdy. I only talk to ChatGPT every now and then to see if it can help me with code problems, and it almost always fails spectacularly unless I’m doing something really basic.

              • danielbln@lemmy.world · ↑1 · 1 year ago

                I work as a systems engineer and use it daily. I feel there is a particular way of using it where it really shines: priming it with “you are an experienced senior Python/Rust/etc. developer who writes robust, idiomatic and maintainable code”. Using GPT-4 (not 3.5) is paramount, and the Data Analysis mode on ChatGPT is also really useful, because GPT can actually run code to validate things.
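
                A minimal sketch of that priming, assuming the OpenAI Python client (the model name and user prompt are placeholders):

                ```python
                from openai import OpenAI

                client = OpenAI()  # assumes OPENAI_API_KEY is set
                resp = client.chat.completions.create(
                    model="gpt-4",  # GPT-4 rather than 3.5, as noted above
                    messages=[
                        # The priming / persona message does a lot of the heavy lifting.
                        {"role": "system",
                         "content": "You are an experienced senior Python developer who writes "
                                    "robust, idiomatic and maintainable code."},
                        {"role": "user",
                         "content": "Refactor this function to handle empty input gracefully: ..."},
                    ],
                )
                print(resp.choices[0].message.content)
                ```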

                No one should force it, of course, but I feel that once you get an intuition for what it does well and how (and when it falls on its face), it really flies.

    • NumbersCanBeFun@kbin.social · ↑9 / ↓4 · 1 year ago

      I actually made a ton of money riding the hype train on a lot of those shit coins 🤣

      I got burned a few times but at that point I was already playing with profit and not really gambling with my own money.

      Key word here: GAMBLE

  • flop_leash_973@lemmy.world · ↑44 / ↓3 · 1 year ago

    They did the same thing with “blockchain”, “NFTs”, and other largely vaporware crypto junk last year.

    They are just riding the hype wave hoping to cash out before the bottom falls out of it like always.

      • flop_leash_973@lemmy.world · ↑5 · 1 year ago

        Not in total. But I also don’t think it is the kingmaker these investors are making it out to be. Just like crypto, it is a tool that can enhance and improve things when applied to the right problems in the right ways. It is not the magic “easy button” that the investor snake currently eating its own tail speculating over it would have everyone believe.

  • Nora@lemmy.ml · ↑21 / ↓2 · 1 year ago

    Those rich fuckers are placing a lot of bets on AI. They’re keeping us busy, overworked and struggling to pay to exist, while they perfect our replacement so that they can be rid of us.

    • atrielienz@lemmy.world · ↑2 · 1 year ago

      Getting rid of us doesn’t make sense. We circulate the money. They need us to generate the things that we then buy. Without that they’d need to actually spend money and they won’t.

      • SCB@lemmy.world · ↑2 / ↓1 · edited · 1 year ago

        Blows my mind that hard-left people would resist AI, since it both allows far more opportunity for direct control of the means of production (as it is a time/scale equalizer) and is the best path toward UBI.

        UBI sounds silly to some, but that changes when there are a hundred million people out of work. Especially since AI will carve out from the middle, not from the bottom. That’s a LOT of reliably-voting spending power eroding.

        • atrielienz@lemmy.world · ↑1 · edited · 1 year ago

          I can understand that people don’t trust corps or their governments to enact UBI or use AI responsibly. They have every reason to believe that given the state of consumer protection laws, privacy laws, and the cost of things that people need like healthcare, food, and housing.

  • Smacks@lemmy.world · ↑17 / ↓2 · 1 year ago

    AI is another one of those tech fads that’ll fade away, but not like blockchain and NFTs. There’s money to be made because people are excited, but its use cases are much more complicated.

    AI is a fantastic tool for creators, including programmers with Copilot. But it isn’t a full-blown replacement for workers quite yet. A lot of capitalists are really excited to slash their workforce in half, sure, but they’re utterly ignoring the true potential of AI. It’s a tool, not a replacement (yet).

    • hubobes@sh.itjust.works · ↑3 · 1 year ago

      Those capitalists also do not understand that if these tools can replace workers, everyone can, through FOSS projects, own these tools and have them work for themselves.

      • SCB@lemmy.world · ↑2 / ↓1 · edited · 1 year ago

        This is an extremely good thing for investors.

        Very successful small companies almost always seek capital.

    • AngryCommieKender@lemmy.world · ↑5 / ↓2 · 1 year ago

      If they really wanted to slash their workforce, middle management could have been automated over a decade ago. They don’t want to fire their friends and their kids.

      • TheMurphy@lemmy.world · ↑3 / ↓4 · 1 year ago

        Middle management doesn’t work for you. They work for your boss, and they make sure that the bosses are not bothered by you.

        They do exactly what they are meant to do.

        • AngryCommieKender@lemmy.world · ↑1 / ↓1 · 1 year ago

          There are completely automated tools that can handle that. I don’t work for anyone, and haven’t for quite a while. People used to work for me, but I prefer working alone. I automated my latest businesses so that I don’t have to deal with the public, or employees. It’s lovely.

    • deweydecibel@lemmy.world · ↑8 · edited · 1 year ago

      When did we “have the chance”? You seriously believe Occupy had the “chance” to destroy Wall Street? I’m down with the spirit of it but please tell me you don’t genuinely believe this was possible.

      Putting aside the ethics of destroying the thing that most people’s retirement is tied to without any workable alternative in place (among countless other negative consequences for the average person), you understand the buildings aren’t horcruxes, right?

    • weedazz@lemmy.world · ↑9 / ↓2 · 1 year ago

      Lol like physically burning it down would have affected anything. This is the same simplistic logic as a wall keeping out illegal immigrants

      • hiddengoat@kbin.social · ↑2 / ↓7 · 1 year ago

        I’m pretty sure the thousands of deaths of major traders would have had some impact. You know, what with all of the fire and burning and melting of human flesh.

        But nah, let’s just do some chanting bro. Totally works, right?

        • Coreidan@lemmy.world · ↑2 · 1 year ago

          Then get out there and start burning shit.

          Crying about it on lemmy just makes you look like a huge turd and hypocrite.

  • kandoh@reddthat.com · ↑15 / ↓4 · 1 year ago

    AI is great, I had gotten tired of Shutterstock and needed something to replace stock images.

    Not sure it’s good for much else though.

    • matthewc@lemmy.world · ↑15 / ↓3 · 1 year ago

      We are in the infancy of generative AI. For you it has already replaced an entire sector of the workforce: artists. For others it has replaced them wholesale. For others it just assists. Hollywood was trying to legally own actors’ voices and likenesses to replace them.

      This technology is not standing still. It will be great at a lot of things in the future. It could be next month. It could be next year. It could be in a decade. Whenever it arrives for your job it will be cheaper than you. There will be no going backward on this technology.

      • Semi-Hemi-Demigod@kbin.social · ↑4 / ↓1 · 1 year ago

        I totally agree that we’re just scratching the surface of what AI can do. But I don’t think it’s what Wall Street thinks it is. It’s not too terribly difficult to spin up an LLM, which means it’s going to be difficult to set up chokepoints to extract rent.

        Though I bet they’ll get the government’s help with that by regulating AI for “safety.” The big guys won’t have a problem, but everyone else will be running illegal programs.

        • lloram239@feddit.de · ↑1 / ↓1 · 1 year ago

          I think we might end up with the Microsoft/Apple/Google situation all over again. While it’s easy to build an AI, having to jump between AIs for each and every task is no fun. I think the one that wins the golden goose is the one that manages to build a complete OS with AI at its core, i.e. instead of a Unix shell, you just have a ChatGPT-like thing sitting there that can interact with all your data and other software in a safe and reliable manner. Basically the computer from Star Trek, where you just tell it what you want and it figures out how to get it.

          That others can spin up their own LLM won’t help here, as whoever gets to be the default AI that pops up when you switch on your computer will be the one that has the control and can reap the benefits.

            • lloram239@feddit.de · ↑1 · 1 year ago

              Yes, but whoever overcomes those problems will be the next Microsoft/Apple/Google (or get rich by getting bought by one of them). I think a large paradigm shift in how we do computing is unavoidable; LLMs are way too powerful to just be left as chatbots.

              • nossaquesapao@lemmy.eco.br · ↑1 · 1 year ago

                Do you think these problems are solvable, and not inherent characteristics? I don’t know; I expect to see computers with high-performance AI modules, but not a fully AI-driven computer.

                • lloram239@feddit.de · ↑1 · edited · 1 year ago

                  Just have the LLM output verifiable scripts instead of manipulating the data directly itself, and keep the data under version control so the AI can undo changes. All pretty doable, though maybe tricky to retrofit into old apps.
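
                  A minimal sketch of that pattern (assumes the data directory is already a git repo; the script text would come from the LLM):

                  ```python
                  import subprocess
                  from pathlib import Path

                  DATA_DIR = Path("data")  # already under git version control

                  def run_generated_step(script_text: str) -> bool:
                      # The LLM hands us a script instead of touching the data itself,
                      # so the change can be reviewed before it runs.
                      script = Path("generated_step.py")
                      script.write_text(script_text)

                      # Snapshot the data so the step can be undone.
                      subprocess.run(["git", "-C", str(DATA_DIR), "add", "-A"], check=True)
                      subprocess.run(["git", "-C", str(DATA_DIR), "commit", "-m",
                                      "before AI step", "--allow-empty"], check=True)

                      result = subprocess.run(["python", str(script)],
                                              capture_output=True, text=True)
                      if result.returncode != 0:
                          # Roll the data back to the snapshot if the script failed.
                          subprocess.run(["git", "-C", str(DATA_DIR), "reset", "--hard",
                                          "HEAD"], check=True)
                          return False
                      return True
                  ```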

        • matthewc@lemmy.world · ↑1 / ↓1 · 1 year ago

          Exactly. It isn’t hard to spin up an LLM.

          I agree corporations will lobby for a legalized monopoly so they’re able to extract rent.

          Generative AI will only grow to replace more and more labor. Labor is most corporations’ largest expense. Participating in the economy as labor is how most people make their living.

          If AI replaces labor, regardless of who controls it, it will change the world’s economy by putting most people out of a job.

          • mrnotoriousman@kbin.social · ↑2 / ↓1 · 1 year ago

            I’m really confused by these comments. I work on AI and absolutely hate all the clickbait and the marketing of simple algorithms as actual AI. But this seems like the pendulum swinging way too hard the other way.

            To put it bluntly: no, it is not simple or trivial to “spin up” an LLM. Unless you want it to be worse than simple chatbots that have already existed for over a decade.

      • CoderKat@lemm.ee · ↑1 · 1 year ago

        Especially where image generation is concerned, the infancy part can’t be overstated. It’s growing so, so fast. A year ago, people would be dismissing AI art as “you can always tell”, it largely couldn’t do hands, and text was right out. But current cutting-edge models can semi-reliably generate works indistinguishable from human output, needing only some fairly trivial manual curation to pick the best results. There are also some models that are now able to do basic text. Just comparing a couple of years’ worth of progress side by side makes it very clear that it’s advancing rapidly, and there are no signs yet that it’s plateaued.

        The big barrier to image generation, though, is profit. The images that it creates are useful, but current understanding is that they can’t be copyrighted and there’s ongoing legal challenges that make it very murky. I don’t think these companies can stay in business from regular people who’ll pay for some tokens to generate art. They need to be usable by commercial companies, and the legal issues will scare many of those away, at least for now.

        • CoderKat@lemm.ee · ↑1 · 1 year ago

          As a dev, I honestly can’t understand that. I probably use regex a dozen times a day. Basic regex is so easy and useful, but describing exactly what you want is so iffy for an AI. The basics of regex are also so easy. It’s not like most people are trying to, say, parse an email address with regex. Most usage is basic, like “extract this consistent pattern from this text” or “remove this (simple) parameter from this function”. It takes me seconds to come up with a working regex in most cases.
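
          For example, the two everyday cases above are each a one-liner:

          ```python
          import re

          log = "user=alice id=42 status=ok\nuser=bob id=7 status=error"
          # "extract this consistent pattern from this text"
          ids = re.findall(r"id=(\d+)", log)               # ['42', '7']

          call = "run_job(name, retries=3, verbose=True)"
          # "remove this (simple) parameter from this function" call
          call = re.sub(r",\s*verbose=\w+", "", call)      # 'run_job(name, retries=3)'

          print(ids, call)
          ```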

          • whoisearth@lemmy.ca · ↑2 · 1 year ago

            There are many of us, myself included, who struggle with regex. We are common enough that there is a pretty standard joke among developers: if you have a problem and solve it with regex, congratulations, you now have two problems.

            I appreciate what you’re saying, and you’re a god among men if you are proficient in regex, but you’re not the norm.

      • kandoh@reddthat.com · ↑1 · 1 year ago

        If you can explain it in a way a graphic designer would understand, I would genuinely be interested.

  • CoderKat@lemm.ee · ↑11 · 1 year ago

    I do think there’s some use for AI in its current form (especially AI art as a tool for developing other works, like movies and video games), but I find it bizarre just how much investors value the current form of AI.

    As cool as I find AI art, I’m not yet sure about its commercial viability, given the serious legal issues it’s facing. So why do investors, who are supposed to care about commercial viability, value it so much?

    And for generative text, I have an even more negative stance. My understanding is that the cost to train and run those AIs is ludicrous. Sure, some companies will use it to make blog spam articles or replace their basic support staff with it, but is that really gonna make it profitable?

    And I emphasized “current form” because the current AI is basically just predictive text. It’s severely limited, and this is extremely evident if you ask it even basic math problems. It’s not capable of actual intelligence, which is what has me very skeptical of it in the long term. Maybe these companies will come up with a new, better form of AI. Or maybe they won’t. But it doesn’t seem like “just increase the size of the model” is sustainable, nor will it, frankly, get us closer to strong(ish?) AI.

    • Kage520@lemmy.world · ↑5 · 1 year ago

      I haven’t used this, but think about all the narrators losing their jobs because AI can now do it with the click of a button. https://customers.microsoft.com/en-AU/story/1646266241611394912-project-gutenberg-nonprofit-azure-synapse-analytics-azure-ai-services

      That’s a lot of people not on the payroll anymore. No health insurance costs, no vacations. Just using the software.
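
      Even a local, offline sketch shows how little is involved (pyttsx3 here is just a stand-in for the neural voices the linked project uses; the file names are made up):

      ```python
      import pyttsx3  # offline text-to-speech, uses whatever voices the OS provides

      engine = pyttsx3.init()
      engine.setProperty("rate", 170)                 # roughly audiobook pace, in wpm

      text = open("chapter_01.txt", encoding="utf-8").read()
      engine.save_to_file(text, "chapter_01.wav")     # "narrate" the chapter to disk
      engine.runAndWait()
      ```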

      Think of all the analytics jobs that AI can replace. You ever spend a day or two making a spreadsheet do whatever you need it to? That’s probably a lot of people’s jobs. AI can make those people more efficient (as long as a human checks its work later), so companies can fire most of the team. That’s a lot more people off the payroll.

      And there are companies working on general AI. That will replace… so many jobs.

    • SCB@lemmy.world · ↑4 / ↓1 · edited · 1 year ago

      And I emphasized “current form” because the current AI is basically just predictive text.

      This is a program I use daily at work. Costs me like $250/year on my budget - literally less than one hotel stay for a work trip. I spent more on food last trip than this will cost my company.

      https://www.synthesia.io/

      It’s a big step away from “predictive text.” This is the AI revolution in action. There are dozens of products you don’t know about shaking up professions you barely ever think about.

      I don’t have to build a Content Gen team because of this software, probably ever.

      My buddy, meanwhile, is on a team building an “AI” for a major property insurance company to help them sift data. Small changes, incrementally, permeating through the system. That’s strong adoption and worth investment.

    • xenoclast@lemmy.world · ↑3 / ↓1 · edited · 1 year ago

      Bandwagoning doesn’t require thinking or logic. It’s FOMO capitalism.

      There’s also no association between the product and its value. It’s perceived value only.

  • perviouslyiner@lemm.ee · ↑3 · edited · 1 year ago

    Doesn’t generative AI need a whole other layer of technology to become reliable?

    The AI needs to control some domain-specific model (like a poser skeleton for pictures of humans) that enforces the rules for how each modelled concept can actually behave, instead of trying to guess the output directly.
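
    As a toy illustration of that idea, the “poser skeleton” could be as simple as a table of joint limits that every generated pose has to pass through (the joints and ranges here are made up):

    ```python
    # Illustrative domain model: allowed ranges for each joint, in degrees.
    JOINT_LIMITS = {
        "elbow":    (0.0, 150.0),
        "knee":     (0.0, 160.0),
        "neck_yaw": (-80.0, 80.0),
    }

    def enforce_pose(proposed: dict[str, float]) -> dict[str, float]:
        """Clamp a generator's proposed joint angles into anatomically valid ranges."""
        checked = {}
        for joint, angle in proposed.items():
            lo, hi = JOINT_LIMITS[joint]
            checked[joint] = min(max(angle, lo), hi)
        return checked

    # The generative model guesses; the domain model enforces the rules.
    print(enforce_pose({"elbow": 210.0, "knee": 45.0, "neck_yaw": -95.0}))
    # {'elbow': 150.0, 'knee': 45.0, 'neck_yaw': -80.0}
    ```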