• itsame@lemmy.world · 1 day ago

    No. You would use a base model (e.g. GPT-4o) to get a reliable language model, then add a set of rules for the chatbot to follow. Every company has its own rules, and this is already in wide use for adding company-specific data like manuals and support documents. Not rocket science at all.
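
    A minimal sketch of that setup, assuming the OpenAI Python SDK; the rule text, the AcmeCorp name, and the support_docs parameter are made up for illustration:

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Company-specific rules go into the system prompt (hypothetical example).
    SYSTEM_RULES = (
        "You are AcmeCorp's support assistant. "
        "Only answer questions about AcmeCorp products. "
        "If you are unsure, say so and refer the user to human support."
    )

    def answer(question: str, support_docs: str) -> str:
        """Answer a customer question, grounded in company documents."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": SYSTEM_RULES},
                # Manual/support-document snippets are injected as extra context.
                {"role": "system", "content": f"Relevant documentation:\n{support_docs}"},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content
    ```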

    • forrgott@lemmy.sdf.org · 1 day ago

      There are so many examples of this method failing that I don’t even know where to start. The most visible, of course, was how that approach failed to stop Grok from “being woke” for, like, a year or more.

      Frankly, you sound like you’re talking straight out of your ass.

      • itsame@lemmy.world · 1 day ago

        Sure, it can go wrong; it is not foolproof. Just like building a new model can cause unwanted surprises.

        BTW, there are many theories about Grok’s unethical behavior, but this one is new to me. The causes I was familiar with are: unfiltered training data, no ethical output restrictions, programming errors or incorrect system maintenance, strategic errors (Elon!), and publishing before proper testing.
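
        For context on “no ethical output restrictions”: one common mitigation is running generated text through a moderation check before showing it to the user. A rough sketch, assuming OpenAI’s moderation endpoint (the fallback reply is made up for illustration):

        ```python
        from openai import OpenAI

        client = OpenAI()

        def is_safe(text: str) -> bool:
            """Return False if the moderation model flags the text."""
            result = client.moderations.create(
                model="omni-moderation-latest",
                input=text,
            )
            return not result.results[0].flagged

        reply = "...model output..."
        if not is_safe(reply):
            reply = "Sorry, I can't help with that."
        ```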