Much of what’s known as ‘AI’ has nothing to do with progress; it’s about lobbyists pushing shoddy digital replacements for human labour that increase billionaires’ profits and make workers’ lives worse.
chatbots like gpt and gemini learn from conversations with users, so what we need is a virus that will pretend to be a user and flood their chats with pro-racism arguments and sexist remarks, which will rub off on the chatbots and make them unacceptable for public use
Nope, they mostly learn during training
hmmmm damn alright
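That distinction is easy to show concretely. Here’s a minimal sketch, assuming the Hugging Face transformers library and the public gpt2 checkpoint as stand-ins for any chatbot model: chatting is just a forward pass over frozen weights, and the weights only change inside an explicit training loop that the operator runs.

```python
# Minimal sketch (assumes `pip install torch transformers` and the public
# "gpt2" checkpoint; any causal LM would behave the same way).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# --- Inference: what happens when you chat with it ---
model.eval()
with torch.no_grad():  # no gradients, so nothing it reads can change it
    ids = tok("some hostile user message ...", return_tensors="pt")
    reply = model.generate(**ids, max_new_tokens=20)
print(tok.decode(reply[0], skip_special_tokens=True))
# the weights are byte-for-byte identical after this call

# --- Training: the only place the model actually "learns" ---
model.train()
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)
batch = tok("example training text", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss  # causal-LM loss
loss.backward()
opt.step()  # this gradient step is what changes the weights
```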
So, just like actual users?
it would be easier to automate the process instead of using real people
https://en.wikipedia.org/wiki/Tay_(chatbot)
You’re thinking it would require effort or coordination on the part of real people, instead of it being default behaviour for some
Been there. Done that
what did you do?
Yeah. GROK and Twitter have entered the chat. Seriously though, we’ve regressed pretty far in what the general public deems acceptable.
How do models learn from conversations with users?
they look at your speech patterns and the specific words you use to make the way they talk seem more familiar. remember when microsoft launched an ai on twitter that would post tweets and learn from other posts? they had to take it down after about 16 hours because it became super racist and homophobic
Training LLMs on tweets is one thing; training them on chats with users is something completely different. I don’t think this actually happens. The model would degrade extremely fast.
you’re right, i’m pretty sure i got that mixed up, sorry!
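On the “speech patterns” point upthread: the style-mirroring people notice within a session doesn’t require any learning at all. A toy sketch of the usual mechanism, with a hypothetical fake_model standing in for a real frozen LLM: the whole conversation so far is re-fed as the prompt on every turn, so your wording is literally part of the model’s input.

```python
# Toy sketch: why a chatbot can seem to pick up your phrasing within a
# session even though its weights never change. `fake_model` is a stand-in
# for a real frozen LLM's generate() call.
def fake_model(prompt: str) -> str:
    # a real model would condition on every word in `prompt`, including yours
    last_user_line = [l for l in prompt.splitlines() if l.startswith("User:")][-1]
    return f"(a reply conditioned on {last_user_line!r})"

history: list[str] = []

def chat_turn(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history) + "\nAssistant:"  # entire session is the input
    reply = fake_model(prompt)                    # forward pass only, no update
    history.append(f"Assistant: {reply}")
    return reply

print(chat_turn("hello there"))
print(chat_turn("why do you talk like me?"))  # prompt now contains both turns
```

Close the session and the history is gone, and with it the apparent “memory” of your wording; nothing about the model itself changed.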