• wizardbeard@lemmy.dbzer0.com · 9 points · edit-2 · 2 days ago

    We are so far away from a paperclip maximizer scenario that I can’t take anyone concerned about that seriously.

    We have nothing even approaching true reasoning, despite all the misuse going on that would indicate otherwise.

    Alignment? Takeoff? None of our current technologies under the AI moniker come anywhere remotely close to any reason for concern, and most signs point to us rapidly approaching a wall with our current approaches.

    Each new version from the top companies in the space delivers smaller capability gains than the last, with costs growing at a pace where “exponentially” doesn’t feel like an adequate descriptor.

    There are probably lateral improvements to be made, but outside of taping multiple tools together there’s not much evidence for any more large breakthroughs in capability.

  • bacon_pdp@lemmy.world · 0 up / 4 down · 2 days ago

      I agree that current technology is extremely unlikely to achieve general intelligence, but my point was that we should never try to achieve AGI; it is not worth the risk until after we solve the alignment problem.

    • chobeat@lemmy.ml (OP) · 3 points · 2 days ago

        “Alignment problem” is what CEOs use as a distraction to deflect responsibility from their grift and frame the issue as a technical problem. That’s another term that makes you lose any credibility.

      • bacon_pdp@lemmy.world · 0 up / 1 down · 2 days ago

          I think we are talking past each other. Alignment with human values is important; otherwise we end up with a paperclip optimizer that wants humans only as a feedstock of atoms, or one that decides to pull a “With Folded Hands” situation.

          None of the “AI” companies are even remotely interested in or working on this legitimate concern.

    • Balder@lemmy.world · 1 point · 2 days ago

        Unfortunately game theory says we’re gonna do it whenever it’s technologically possible.