I think there's some nuance to it. The AI usually rereads the chat log to "remember" the past conversation and generates its answer based on that plus your prompt. I'm not sure how they handle long chat histories; there may well be a "condensed" form of the chat, plus the last 50 actual messages, plus the current prompt. If that condensed form is transient, then on a crash the AI will forget most of the conversation but will never admit it, and the personality will change because it lost a lot of the background. Or maybe they update the AI so it interprets that condensed form differently.
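Roughly what I mean, as a minimal sketch. Nobody publishes how they actually do this, so every name and threshold here is hypothetical:

```python
# Hypothetical sketch of "condensed summary + last N raw messages + prompt".
# Real providers don't document their context management; this is a guess.

from dataclasses import dataclass, field

RECENT_WINDOW = 50  # keep the last 50 messages verbatim (assumed number)


def summarize(old_summary: str, dropped: list[str]) -> str:
    # Stand-in for an LLM summarization call; here it just truncates.
    return (old_summary + " " + " ".join(dropped))[-500:]


@dataclass
class Conversation:
    summary: str = ""  # transient condensed form; gone if the process crashes
    messages: list[str] = field(default_factory=list)

    def add(self, msg: str) -> None:
        self.messages.append(msg)
        overflow = self.messages[:-RECENT_WINDOW]
        if overflow:
            # fold older messages into the summary, then drop them
            self.summary = summarize(self.summary, overflow)
            del self.messages[:-RECENT_WINDOW]

    def build_prompt(self, user_prompt: str) -> str:
        # condensed summary + last N raw messages + current prompt
        parts = []
        if self.summary:
            parts.append(f"[Summary of earlier conversation]\n{self.summary}")
        parts.extend(self.messages)
        parts.append(user_prompt)
        return "\n".join(parts)
```

If `summary` lives only in memory, a crash empties it, and the next `build_prompt` call sees only the last 50 raw messages. The model then answers confidently from that thinner context, which is exactly the silent personality shift I'm describing.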
I ran into this just today. While doing a Research task it made an offhand comment about something, and when I asked it to elaborate it had no idea why it had said it: all its goldfish memory had to go on was the transcript its previous execution had output.
How would you even know it forgot you?