• uuldika@lemmy.ml

LLMs are trained on human writing, so they’ll always be fundamentally anthropomorphic. you could fine-tune them to sound more clinical, but that would likely make them worse at reasoning and planning.

for example, I notice GPT-5 uses “I” a lot, especially saying things like “I need to make a choice” or “my suspicion is.” I think that’s actually a side effect of the RL training they’ve done to make it more agentic: having some concept of self is necessary when navigating an environment.

philosophical zombies are no longer just a thought experiment.