• NotANumber@lemmy.dbzer0.com
    1 day ago

    I don’t trust OpenAI and try to avoid using them. That being said, they have always been one of the more careful labs regarding safety and alignment.

    I also don’t need you or OpenAI to tell me that hallucinations are inevitable. Here, have a read of this:

    Xu et al., “Hallucination is Inevitable: An Innate Limitation of Large Language Models”, 2025-02-13, http://arxiv.org/abs/2401.11817
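
    For what it’s worth, the core of their argument (a rough sketch as I understand it, simplified from the paper’s actual notation) is a diagonalization: enumerate all computable LLMs as h_1, h_2, h_3, …, then construct a ground-truth function f so that f(s_i) is some answer different from h_i(s_i) for each input s_i. Then

    ∀i ∃ s_i : h_i(s_i) ≠ f(s_i)

    i.e. every computable model disagrees with some ground truth on at least one input. It’s an innate limitation, not an engineering bug you can patch out.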

    Regarding resource usage: this is why open-weights models like those made by the Chinese labs or Mistral in Europe are better. They are much more efficient and, frankly, more innovative than whatever OpenAI is doing.

    Ultimately though you can’t just blame LLMs for people committing suicide. That’s a lazy excuse to avoid addressing real problems like how society treats neurodivergent people. These are the same problems that lead to radicalization, including incels and neo-Nazis, and they were all happening before LLM chatbots took off.

    • mojofrododojo@lemmy.world
      22 hours ago

      Ultimately though you can’t just blame LLMs for people committing suicide.

      well that settles it then! you’re apparently such an authority.

      pfft.

      meanwhile, here in reality, the lawsuits and the victims will continue to pile up. and those safety efforts you yourself admit to - maybe they’ll stop the LLM-associated tragedies.

      maybe. pfft.