• groet@feddit.org · 2 months ago

      I think there is a bit of nuance to it. The AI usually rereads the chat log to “remember” the past conversation and generates the answer based on that plus your prompt. I’m not sure how they handle long chat histories; there may well be a “condensed” form of the chat, plus the last 50 actual messages, plus the current prompt (rough sketch below). If that condensed form is transient, then the AI will forget most of the conversation after a crash but will never admit it. So the personality will change because it lost a lot of the background. Or maybe they update the AI so it interprets that condensed form differently.
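
      Something like this, purely as a guess — the `summarize()` step, `MAX_RECENT`, and the whole layout are my assumptions, since vendors don’t document their actual context handling:

      ```python
      # Illustrative sketch of the "summary + last N messages + prompt" scheme
      # guessed at above. All names and numbers here are assumptions.

      MAX_RECENT = 50  # keep the last 50 messages verbatim


      def summarize(summary: str, message: str) -> str:
          # Stand-in: a real system would likely ask the model itself to
          # compress the old turn into the running summary.
          return (summary + " " + message)[-2000:]


      class ChatContext:
          def __init__(self) -> None:
              self.summary = ""             # condensed, transient form of old turns
              self.recent: list[str] = []   # verbatim recent turns

          def add(self, message: str) -> None:
              self.recent.append(message)
              if len(self.recent) > MAX_RECENT:
                  # Fold the oldest turn into the summary and drop it.
                  self.summary = summarize(self.summary, self.recent.pop(0))

          def build_prompt(self, user_prompt: str) -> str:
              # What the model actually "sees" on each turn.
              return "\n".join(
                  ["[Summary] " + self.summary]
                  + self.recent
                  + ["[Prompt] " + user_prompt]
              )
      ```

      If `self.summary` lives only in memory like this, a crash wipes it; on restart the model answers from whatever survives, which would look exactly like the silent personality shift described above.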

      • skisnow@lemmy.ca · 2 months ago

        I ran into exactly this today. During a Research task it made an offhand comment about something, and when I asked it to elaborate it had no clue why it had said it, because all its goldfish memory had to go on was the transcript of what its previous execution had output.