• danciestlobster@lemmy.zip
    link
    fedilink
    arrow-up
    22
    arrow-down
    2
    ·
    9 days ago

    Even for people who generally like what AI does (they seem to be fairly rare here), the absolutely obscene climate impact, the implications for people’s jobs and livelihoods, the privacy breaches, and the general internet enshittification are surely reason enough to be against it.

    • Hanrahan@slrpnk.net
      link
      fedilink
      English
      arrow-up
      7
      ·
      9 days ago

      The jobs thing I don’t understand; it’s the distribution of productivity gains that’s the issue. Why we keep voting for the same politicians who ensure those gains go to the wealthy is the real mystery.

      • danciestlobster@lemmy.zip
        link
        fedilink
        arrow-up
        5
        ·
        9 days ago

        Oh, I absolutely agree. But currently, the people in charge of making those decisions have demonstrated moral bankruptcy and will absolutely ensure the productivity gains funnel to the top. Until that changes, AI’s impact on jobs will likely be devastating.

        And I’m all for changing it. It’s just going to be a long and/or violent process.

    • iceberg314@midwest.social
      link
      fedilink
      arrow-up
      5
      ·
      9 days ago

      That is why I like small, specialized, locally hosted AI. It runs acceptably fast and quiet on my gaming PC, it’s private, and I can give it knowledge in small doses on specific topics and projects.

      • ctrl_alt_esc@lemmy.ml
        link
        fedilink
        arrow-up
        2
        ·
        8 days ago

        Which model do you use, and what are your specs? I ran a couple on an RTX 5060 with 16 GB, and it’s too slow to be usable for larger models, while the smaller ones are mostly useless.

        • iceberg314@midwest.social
          link
          fedilink
          arrow-up
          1
          ·
          8 days ago

          I also have a 5060 (Ti) with 16 GB of VRAM. I tend to use GPT-OSS:20B or Qwen3:14B with a context of ~30k. I have a custom system prompt in Open WebUI for the response style I like. That takes up about 14 GB of my 16 GB of VRAM.

          But yeah, it is slower and not as “smart” as the cloud-based models. I think the inconvenience of the speed and having to fact-check/test code is worth it for the privacy and environmental trade-offs, though.
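          For anyone wondering how a ~20B model plus a ~30k context squeezes into 16 GB, here is a rough back-of-envelope sketch. The layer/head numbers below are illustrative assumptions for a typical GQA architecture, not the actual GPT-OSS config, so treat the result as ballpark only:

```python
# Rough VRAM budget for a 4-bit-quantized ~20B model with ~30k context.
# All figures are illustrative assumptions, not measured values.

def gib(n_bytes: float) -> float:
    """Convert a byte count to GiB."""
    return n_bytes / (1024 ** 3)

# Weights: ~20B params at ~4.25 bits/param (4-bit quant plus scale overhead)
params = 20e9
weight_bytes = params * 4.25 / 8

# KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes/elem
n_layers, n_kv_heads, head_dim = 24, 8, 64   # assumed GQA shape
bytes_per_elem = 2                           # fp16 cache
kv_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
context = 30_000
kv_bytes = kv_per_token * context

total = gib(weight_bytes + kv_bytes)
print(f"weights ~{gib(weight_bytes):.1f} GiB, KV ~{gib(kv_bytes):.1f} GiB, "
      f"total ~{total:.1f} GiB")
```

          With a couple of GiB on top for activations and runtime buffers, that lands right around the ~14 GB figure, which is why the context size is the main knob for fitting in 16 GB.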

          • Hexarei@beehaw.org
            link
            fedilink
            arrow-up
            1
            ·
            8 days ago

            I’ve had good success on similar hardware (5070 + more RAM) with GLM-4.7-Flash, using llama.cpp’s --cpu-moe flag - I can get up to 150k context with it at 20-ish tok/sec. I’ve found it to be a lot better for agentic use than GPT-OSS as well; it seems to put in a much deeper reasoning effort, so while it spends more tokens, it seems worth it for the end result.
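            For reference, a llama.cpp launch using that expert-offload approach looks roughly like this. The model path, port, and exact values are placeholders, not a tested config; --cpu-moe keeps the MoE expert tensors in system RAM so only the dense layers and KV cache need VRAM:

```shell
# Sketch of a llama.cpp server launch with MoE experts offloaded to CPU.
# Adjust the model path and context size for your own hardware.
llama-server \
  --model ./models/my-model.gguf \
  --cpu-moe \
  --n-gpu-layers 99 \
  --ctx-size 150000 \
  --port 8080
```

            The trade-off is that expert layers run on the CPU each step, so generation is slower than full GPU offload but large contexts become feasible on a 16 GB card.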

    • errer@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      9 days ago

      It has its uses, but it feels like more of a 10-20% productivity boost when used effectively, not the 500%, “let’s have openclaw replace my whole company!” kind of BS being pushed by AI companies.

      • black0ut@pawb.social
        link
        fedilink
        arrow-up
        1
        ·
        9 days ago

        If it is a productivity boost for you, it is at the cost of someone else who will have to proofread and test everything you do. LLMs (and genAI) are useless.

        • errer@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          8 days ago

          It’s no more work than proofreading any other code I write. It sounds like someone just slopped out code with an LLM and didn’t do the due diligence of checking it themselves. Using an LLM doesn’t mean doing no work; I think that’s when people get in trouble.

  • treadful@lemmy.zip
    link
    fedilink
    English
    arrow-up
    16
    ·
    9 days ago

    Considering the username, I’m just sitting here wondering if we’re just arguing against an LLM.

  • Mothra@mander.xyz
    link
    fedilink
    arrow-up
    12
    ·
    9 days ago

    Reality as an artist dictates that all my work was datamined without my consent, and anything I post in the future, should I choose to do so, will be too. And the end result of this datamining is to drive artists like me out of business. I don’t mind the average Joe getting their anime girl with three titties in five minutes, but company owners are making money off this and paying nobody for their source material.

  • Ryoae@piefed.social
    link
    fedilink
    English
    arrow-up
    12
    arrow-down
    1
    ·
    9 days ago

    A lot of people outside Lemmy, too - people who are not techbros or out-of-touch corporate zealots - don’t like AI. It is being treated as a solve-all solution for everyday problems when it does its job horribly, gets in the way, and artificially messes up anything in reach.

    • frank@sopuli.xyz
      link
      fedilink
      arrow-up
      2
      ·
      9 days ago

      Yup. I suspect on other social media that some of the positive sentiment towards AI is just astroturfing.

  • unwarlikeExtortion@lemmy.ml
    link
    fedilink
    arrow-up
    9
    ·
    edit-2
    8 days ago

    Yeah, Lemmy is a bit over-the-top anti-AI, but most of it is based in reality.

    There are a bunch of problems with AI, and they outnumber the good points by a mile.

    The main cause of that fact is the entire AI bubble.

    AI wastes a fuckton of energy. Of course, this energy isn’t free: communities pay. Electricity demand goes up, and so does price. Then, most electricity isn’t green. And on top of that, the rise in demand causes more electricity peaks, which almost exclusively get “fixed” through fossil fuel-based methods.

    From another angle, AI disrupts markets. And not in a good way. Companies dump millions into AI while neglecting their employees (who get laid off because AI “can replace” them), and their customers as well (since instead of doing useful stuff for consumers they pump out AI-branded bullshit no one wants or needs).

    Then, big AI companies spit in the face of copyright and have the audacity to turn around and claim copyright on their models’ outputs. If the inputs are fair game, so are the outputs. Copyright is a very vague, misunderstood, and misused term, and no argument I’ve heard claiming that feeding stuff into AI is fair use was grounded in reality.

    That all being said, AI is here to stay. I’ve been thinking long and hard about similar fundamental changes to how human society functions, and I think I found one: photography.

    Way back when, you had to do things painstakingly by hand: drawing, copying books by hand, etc.

    Then the printing press came. Revolutionary? Sure. But not as revolutionary as photography. Instead of writing by hand, you had to typeset by hand before printing. This made the process scalable, but it was still painstaking work.

    But photography is a different matter. You just have to make (or buy) a camera and the other required supplies (film, developing media, etc.), and then you merely have to set up the camera, take the photo, develop the film, and make the print.

    Even in the early days of photography, while these processes took some time, it wasn’t painstaking. To take a photo, you set up the camera, and wait. To develop film, you dunk the film into a chemical bath, and wait. To transfer the image onto paper - a similar ordeal. Set, forget.

    Photography fundamentally changed how the entirety of society works. Painters complained and lost jobs and livelihoods - like the jobs “stolen” by AI. Instead of drawing something, which required a lot of skill, taking a photo is much simpler (and faster).

    Yesterday, instead of having to paint stuff, you’d take a photo. Today, instead of taking a photo, you ask AI.

    On the copyright front, the parallels are obvious: taking a photo of a book is fair use, but photocopying a book isn’t. The problem with AI is that it applies some transformations to the original, so it’s obfuscated inside the model. But the obfuscation can be undone, as AI often happily spits out certain inputs verbatim when asked. Take a photo of a page - okay. Photocopy the entire book - not okay.

    The situation is the same when we look at artwork instead of books. Taking a photo of an artwork in a museum is okay; scanning an artwork (duplicating it verbatim) isn’t. Same for movies: a frame is probably going to be okay, but the entire movie won’t be.

    Going by the closest analogue, there is absolutely no justification for indiscriminately feeding everything and anything into AI, since the same material is clearly protected against indiscriminate photocopying and verbatim copying.

  • Xaphanos@lemmy.world
    link
    fedilink
    English
    arrow-up
    8
    ·
    9 days ago

    I am not “against” AI. I am against unfettered capitalism and how it is poisoning humanity. AI can hold the same kind of promise that Internet v1 had before the first eternal September. But because of the “success” of the capitalization of the web, folks are flocking to AI on the assumption that something similar will happen to it. I see it as a gold rush. Some boom towns may happen along the way. Some may endure. But it’s still very early to know that.

  • Fizz@lemmy.nz
    link
    fedilink
    arrow-up
    5
    ·
    9 days ago

    I’m pissed at how it’s able to license-wash FOSS code and people’s IP. But it seems there are no rules for American or Chinese tech companies, because governments refuse to legislate, so IP should just be completely removed. There is no way any of their IP should be respected.

  • WoodScientist@lemmy.world
    link
    fedilink
    arrow-up
    5
    ·
    9 days ago

    People come to Lemmy precisely because they’re tired of big algorithmic corporate platforms. They come here precisely to get away from AI slop on platforms like Facebook. Hell, half the people here have been banned from reddit based on comically flawed algorithmic AI moderation tools. This platform is heavily selected for people who dislike AI and AI content.

    • Witty Computer@feddit.orgOP
      link
      fedilink
      English
      arrow-up
      0
      arrow-down
      9
      ·
      9 days ago

      Although I agree about the algorithmic abuses of AI, I didn’t expect groupthink to be so prevalent, especially in a tech-leaning group. I don’t mind being unpopular, and I guess the lack of AI might work to my advantage here.

      • rishado@lemmy.world
        link
        fedilink
        arrow-up
        7
        ·
        9 days ago

        Just because you have an unpopular opinion in actual tech-leaning groups doesn’t mean it’s groupthink. It means your opinion sucks.

      • NorthWestWind@lemmy.world
        link
        fedilink
        arrow-up
        7
        ·
        9 days ago

        Being tech-leaning is exactly why we are against AI. We are just much more aware of the resource it’s consuming, the privacy it’s infringing, and the content it’s stealing.

        • Witty Computer@feddit.orgOP
          link
          fedilink
          English
          arrow-up
          0
          arrow-down
          12
          ·
          9 days ago

          No disrespect, but with that attitude you won’t be tech-leaning for long. I understand where you’re coming from; it’s just that the “we are” sounds a bit culty, and I really dislike cults.

          • black0ut@pawb.social
            link
            fedilink
            arrow-up
            7
            ·
            9 days ago

            The issue with “tech-leaning” people who believe AI is the future is that they’re in the “peak of mount stupid” part of the Dunning-Kruger curve. Once you get past that, you realize AI was never good at anything and it’s harmful to everyone in a million different ways. Most of lemmy’s tech-leaning people have already realized that, and are actively trying to avoid AI.

            Graph of the Dunning-Kruger effect, where you can see a curve displaying confidence on the vertical axis and knowledge on the horizontal axis. At low knowledge, confidence peaks, and that part is labelled "peak of mount stupid". After that, with more knowledge, confidence goes down and is labelled "valley of despair". Finally, confidence starts to grow very gradually when approaching high knowledge, and this part is labelled "slope of enlightenment".

  • Angryhumanoid@fedinsfw.app
    link
    fedilink
    English
    arrow-up
    5
    arrow-down
    1
    ·
    9 days ago

    I have been working with LLMs for decades. I know what they can do and what they can’t. I admit they have grown in leaps and bounds in the last few years because of the hype, but therein lies the issue: there is still way too much hype. It’s not the end-all solution some think it is, it’s driving up hardware prices, the environmental impact is horrendous, and it’s a new bullshit business marketing term that serves only to artificially inflate stock prices. “Agentic” is the new “data-driven”.

  • queermunist she/her@lemmy.ml
    link
    fedilink
    arrow-up
    3
    ·
    edit-2
    9 days ago

    I’m against the LLM bubble. They’re gobbling up all of our compute, electricity, water, and basically all investment capital while not even generating productivity gains or improving anyone’s lives. Internet search is now dead, all my fan communities are just full of slop instead of art from artists, and the piggies that own the data centers are destroying all culture to feed their autocomplete machines. LLMs have accelerated the decay of civilization in a way that we might struggle to recover from when the bubble pops. Half the time it’s not even AI, the real work is just outsourced to some superexploited workers in the Global South.

    There are some legitimate use-cases for LLM technology, but the way they’re trying to cram it into everything is actually just wrecking everything. It seems like they’re destroying the world for a worse calculator that can pretend to be your girlfriend.

  • undrwater@lemmy.world
    link
    fedilink
    arrow-up
    4
    arrow-down
    1
    ·
    9 days ago

    A tool becomes “good” or “bad” based on its implementation.

    The current trend towards massive unsustainable data centers is pretty objectively “bad” for humans and other creatures for questionable benefit.

    Localized AI, on the other hand, would be less harmful, and more useful. This would move the needle towards a more objective “good”.

    • Witty Computer@feddit.orgOP
      link
      fedilink
      English
      arrow-up
      0
      arrow-down
      8
      ·
      9 days ago

      Then be an activist. Never pay for AI - I don’t. Maybe 30 dollars a year for tools I can’t do without, but take everything for free. Make the unsustainability collapse under its own weight. That doesn’t mean I don’t use AI every day for work, spirituality, and learning. Take advantage of what you have available. Groupthink sucks.

        • AngryMob@lemmy.one
          link
          fedilink
          arrow-up
          1
          ·
          8 days ago

          Not OP, but I assume they just mean talking to an LLM about spiritual topics. Especially with local AI, if you pick a less-censored model so it doesn’t constantly spit out refusals, they can hold conversations on difficult topics pretty well now. Whether that is politics, religion, law, finance, medicine, or roleplaying porn doesn’t really matter to the LLM.

          • lividweasel@lemmy.world
            link
            fedilink
            arrow-up
            4
            ·
            9 days ago

            This you?

            Never pay for AI, I don’t. Maybe 30 dollars a year for tools I can’t do without, but take everything for free.

            Maybe you were referring to non-AI tools, though mentioning those here would be unusual, so the most likely reading is that you were saying something like “I don’t pay for AI, except when I do”.

            • Witty Computer@feddit.orgOP
              link
              fedilink
              English
              arrow-up
              0
              arrow-down
              8
              ·
              9 days ago

              I see where you’re coming from… When there is no way of doing something without AI, I take on the job and pay peanuts for the AI. I choose earning money over not paying for AI. I can totally live without it; it’s just that work is preferable to a tantrum.

  • fodor@lemmy.zip
    link
    fedilink
    arrow-up
    2
    ·
    9 days ago

    Many people here know that “AI” as a term is pure snake oil. You aren’t actually talking about anything until you say what you think it means, or specific examples.

    AI research goes back to the early 1950s. Being “against” all of that old research is kinda meaningless… So it’s your job to clarify what you mean, or not, and other users will respond accordingly.

  • Greg Clarke@lemmy.ca
    link
    fedilink
    English
    arrow-up
    4
    arrow-down
    2
    ·
    9 days ago

    I think there is a lot of misdirected frustration. The technology isn’t the issue, the way it’s been implemented is the issue. There are some useful use cases for AI.