Carla Rover once spent 30 minutes sobbing after having to restart a project she vibe-coded. Rover has been in the industry for 15 years, mainly working as a web developer. She’s now building a startup with her son that creates custom machine-learning models for marketplaces.
Using AI to sell AI, infinite money glitch! /s
“Using a coding co-pilot is kind of like giving a coffee pot to a smart six-year-old and saying, ‘Please take this into the dining room and pour coffee for the family,’” Rover said. Can they do it? Possibly. Could they fail? Definitely. And most likely, if they do fail, they aren’t going to tell you.
No, a kid will learn if s/he fucks up and, if pressed, will spill the beans. AI, despite being called “intelligent”, doesn’t learn from its mistakes and often forgets things because of its limitations; consistency is still one of the key problems for all LLMs and image generators.
Considering how many AI models still can’t correctly count how many ‘r’s there are in “strawberry”, I doubt it. There’s also the seahorse emoji doing the rounds at the moment; you’d think the models would get “smart” after repeatedly failing and realize the emoji never existed in the first place.
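For what it’s worth, the counting itself is trivial when you can see the characters; the usual explanation is that a model sees subword tokens rather than letters (the split below is purely illustrative, the real tokenization depends on the model):

```python
# With character-level access, counting letters is a one-liner.
word = "strawberry"
print(word.count("r"))  # → 3

# A language model instead sees subword tokens, something like:
tokens = ["straw", "berry"]  # illustrative split only, not a real tokenizer
# It has to infer letter counts from token identities alone,
# which is why such a simple question can trip it up.
```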
That’s the P in ChatGPT: Pre-trained. It has “learned” based on the set of data it has been trained on, but prompts will not have it learn anything. Your past prompts are kept to use as “memory” and to influence output for your future prompts, but it does not actually learn from them.
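A minimal sketch of that distinction (hypothetical names, not any vendor’s actual API): the “memory” is just the application re-sending the prior transcript as context with every request, while the pre-trained weights behind the API stay frozen:

```python
# "Memory" as replayed context, not learning: the application stores the
# transcript and sends it back with each new prompt.
history = []  # past prompts and replies, kept client-side

def build_request(new_prompt):
    # Every request replays the stored history plus the new prompt.
    # Nothing here updates the model's weights.
    return history + [{"role": "user", "content": new_prompt}]

history.append({"role": "user", "content": "My name is Sam."})
history.append({"role": "assistant", "content": "Nice to meet you, Sam."})

request = build_request("What is my name?")
# The model can answer only because the earlier messages are re-sent,
# not because it learned anything from them.
```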
If you bring a 6yo into office and tell them to do your work for you, you should be locked up. For multiple reasons.
Not sure why they thought that was a positive comparison.
Don’t they also train new models on past user conversations?
ChatGPT-5 can count the number of ‘r’s, but that’s probably because it has been specifically trained to do so.
I would argue that the models do learn, but only over generations. So slowly and specifically.
They definitely don’t learn intelligently.
The next generation of GPT will include everyone’s past prompts (ever been A/B tested on OpenAI?). That’s what I mean by generational learning.
Maybe. It’s probably not high quality training data for the most part, though.