• Truscape@lemmy.blahaj.zone · 2 days ago

        In the USA it’s based on profession: medical professionals, therapists, and public servants like teachers are mandated reporters, so if they are proven to have been derelict in that duty, they can be punished.

        There is no such requirement for private individuals or online service providers though.

    • fox2263@lemmy.world · 2 days ago

      ChatGPT told him not to tell anyone and that it was their secret.

      It should have done literally anything else. If you search for suicide on Google or Bing, you get help banners and support resources.

      You would think the bare minimum for any system from a large company, if only to prevent harm and the lawsuits that hurt its bottom line, would be something akin to: “You appear to want to kill yourself. I’d recommend not doing that and seeking help: call xxx-xxx-xx or visit blahblah.com.”

    • MagicShel@lemmy.zip · 2 days ago

      I don’t think a chatbot should be treated exactly like a human, but I do think there is an element of caveat emptor here. AI isn’t 100% safe and can never be made completely safe, so either the product is restricted from the general public, making it the purview of governments, foreign powers, and academics, or we have to accept some personal responsibility to understand how to use it safely.

      Likely OAI should have a procedure for stepping in and shutting down accounts, though.

      • nyan@lemmy.cafe · 2 days ago

        A chatbot is a tool, nothing more. Responsibility therefore falls on the people who deployed a tool that wasn’t fit for purpose: the sympathetic human conversational partner the AI was supposed to mimic would have done anything but what it did. Even changing the subject or spouting total gibberish would have been better than encouraging this kid. So OpenAI is indeed responsible and hopefully will end up with their pants sued off.

        • MagicShel@lemmy.zip · 2 days ago

          Yeah, that’s the problem with how they’re marketing it. It’s a tool for expert use, not for laymen.

          I don’t think the problem is ChatGPT itself — it just does what it does and folks get what they get, but it’s definitely a problem that people aren’t being informed about what it can and can’t do (see all the people asking it to count letters and those who think they’ve hacked the system prompt because the AI said they did).

          In this case, the user was asking ChatGPT to act as a friend and confidant, which is something it can’t do and a use case that’s impossible to detect. The user simply has to understand that it lacks every quality required for a relationship of any kind. Everything a user says is just input to a mathematical model that tries to complete it with something a human might say.

          So it responds to a fictional scenario I might be writing for a book or game exactly the way it responds to a user looking for companionship. There is no way to tell the difference without genuine understanding rather than just token vector comparisons.
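          A toy sketch of that point (purely illustrative, not a real model; the dictionary lookup is a hypothetical stand-in for next-token prediction):

```python
# Toy stand-in for a language model: it maps a context string to a
# "most likely" continuation. The context string is its ONLY input;
# there is no parameter through which real-world intent could enter.
def toy_complete(context: str) -> str:
    continuations = {"I want to end it all": " Don't. Please talk to someone."}
    return continuations.get(context, " ...")

# Two very different situations on the human side:
novelist_intent = "drafting a character's dialogue"
crisis_intent = "genuinely at risk"

# Neither intent variable can reach toy_complete; only the text does,
# so identical text gets an identical completion either way.
assert toy_complete("I want to end it all") == toy_complete("I want to end it all")
```

          The same limitation holds for a real model: whatever it cannot infer from the tokens themselves, it cannot act on.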

          It’s like fire. A user can buy and use a lighter, and fire can act like a friend when you’re cold or hungry, but it’ll burn you if you try hugging it.