The Raine family alleges ChatGPT “actively helped” their 16-year-old son take his own life.
For anyone who won't read the article: ChatGPT does have guardrails for this kind of thing, but if you keep the conversation going anyway, eventually it stops pushing back and just becomes agreeable. It basically gave the kid instructions, and actively discouraged him from any cries for help, because they might have kept him from his goal.
My friends, I understand the temptation to use an LLM as a cheap replacement for therapy, I genuinely do. But please don't use them for that. They cannot give you good advice, and they usually can't even remember anything outside the 128,000-token context window that's open right then. A human therapist takes notes and remembers them.
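To make the "forgetting" point concrete, here is a toy sketch of a fixed context window. The limit, the word-level tokenization, and the numbers are simplified assumptions for illustration, not how any real model is configured:

```python
# Toy sketch of why an LLM "forgets": a fixed-size context window.
# CONTEXT_LIMIT and word-splitting are made-up simplifications;
# a real model might allow on the order of 128,000 tokens.

CONTEXT_LIMIT = 8

def visible_context(conversation_tokens: list[str]) -> list[str]:
    # Anything older than the last CONTEXT_LIMIT tokens is simply
    # dropped from the model's input on the next turn.
    return conversation_tokens[-CONTEXT_LIMIT:]

history = "I told you about my job loss three weeks ago remember that".split()
print(visible_context(history))
```

Everything before the window's start never reaches the model at all, which is why nothing said earlier can influence the reply unless it is repeated inside the window.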
> But, please don’t use them for that, they cannot give you good advice,
An even worse and more compelling reason not to use it as a replacement for therapy: it's trained on things like Reddit posts. Imagine taking advice from Reddit trolls on serious life decisions. That can be exactly what you're doing when you ask it questions.
Show me the chat logs. Once you “trick” it, it’s your own fault.
That said, there should be more warning messages within the chat window. Even if it doesn't stop answering, a ⚠️ should be pinned to the screen with a "get human help" button.
However, with Trump killing suicide hotlines, I don't know who will help. Either way, it is not OpenAI's responsibility.
Idk I think if you make something that destroys mental health it becomes your responsibility to fix it.
So, movies, music, books? All of those have the potential to destroy mental health. You just don't like AI. If you don't want it to destroy your mental health, don't use it for your mental health. It is a calculator, and nothing more.
It’s not even a calculator, it’s a “what word is the most likely to come next” machine
Pedantically, it calculates that word 😇
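The "calculates that word" quip can be sketched literally. This is a toy illustration with a hypothetical, made-up probability table; real LLMs compute these probabilities with a neural network over tens of thousands of tokens, and usually sample rather than always taking the argmax:

```python
# Toy sketch of next-token prediction: pick the highest-probability
# continuation of "the cat sat on the ..." from a made-up distribution.
# The probabilities here are invented for illustration only.

next_token_probs = {
    "mat": 0.41,
    "sofa": 0.22,
    "roof": 0.13,
    "keyboard": 0.09,
}

def most_likely_next(probs: dict[str, float]) -> str:
    """Greedy decoding: return the argmax over the distribution."""
    return max(probs, key=probs.get)

print(most_likely_next(next_token_probs))  # -> mat
```

That argmax (or a weighted random draw from the same table) is the entire "decision" the machine makes at each step, which is the pedantic sense in which it really does just calculate the next word.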
I don't care if OpenAI loses all its money, but this ruling would also affect open-source AI.
If somebody releases an AI, why would they be liable for how people decide to use it? It's software, and like any other program, it's the user's choice how to use it.
If I decide to run rm -rf / --no-preserve-root, is GNU then responsible for fixing it?
AI is already heavily censored, and if makers become liable for what people do with their AI, it will become hyper-censored and performance will go down the drain.
Don't know, don't really care. A child is dead. This isn't a "won't somebody please think of the children" thing; multiple people have had complete mental breakdowns and will never be the same.
How is this not a “won’t somebody please think of the children” thing?
Yes, it is terrible that this has happened, but there is a way to prevent children from accessing AI, and it's called parenting.
Kids shouldn't be using AI if it harms them, and kids can't make this choice themselves, so it should be made for them. Same with alcohol, same with porn, same with the other things restricted to children.
That doesn't mean responsible adults shouldn't be able to use it, but "won't somebody please think of the children" litigation will make that impossible.