• Marxism-Fennekinism@lemmy.ml

    At some point the Linux kernel will be patched to detect and terminate forking attacks, and sadly all these memes will be dead.

    • Cethin@lemmy.zip

      I doubt it. It’s the halting problem: there are perfectly legitimate uses for similar constructs, and you can’t determine whether a program will halt without running it. Maybe they’d patch it to catch this specific string, but then you’d just write something that looks like it could do real work yet never halts.
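      For example, a minimal sketch of an innocent-looking non-halting loop (192.0.2.1 is a reserved documentation address, so the check can never succeed):

          # looks like an ordinary "wait for the network" retry loop,
          # but the target address is unreachable by design, so it never halts
          while ! ping -c1 -W1 192.0.2.1 >/dev/null 2>&1; do
              sleep 1
          done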

      • NιƙƙιDιɱҽʂ@lemmy.world

        That’s why I run all my terminal commands through ChatGPT to verify they aren’t some sort of fork bomb. My system is unusably slow, but it’s AI protected, futuristic, and super practical.

        • 🦥󠀠󠀠󠀠󠀠󠀠󠀠@lemmy.world

          Seems inefficient; one should just integrate ChatGPT into Bash to automatically check these things.

          You said ‘ls’ but did you really mean ‘ls -la’? Imma go ahead and just give you the output from ‘cat /dev/urandom’ anyway.

      • Marxism-Fennekinism@lemmy.ml

        They could always do what Android does and give you a prompt to force close an app that hangs for too long, or have a default subprocess limit and an optional whitelist of programs that can have as many subprocesses as they want.
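        Something close to that “default limit plus whitelist” idea already exists as per-user limits; a rough sketch via pam_limits (the build user is a hypothetical stand-in for a whitelisted account):

            # /etc/security/limits.conf
            *        hard    nproc    4096        # default cap on processes per user
            build    hard    nproc    unlimited   # hypothetical "whitelisted" account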

        • barsoap@lemm.ee

          The thing about fork bombs is that it’s not any particular process that takes up all the resources; each one is doing nothing in a minimal amount of space. You could say “OK, this group of processes is using a lot of resources” and kill it, but then you’re probably going to take down the whole user session, because the starting point is not trivial to establish. Though I guess you could just kill all shells connected to the fork morass; that won’t fix the general case, but it’s a start. OTOH, I don’t think kernel devs are keen on special-case solutions.

          • sus@programming.dev

            You don’t really have to kill every process. Limiting the spawning of new user-mode processes once a limit has been reached should be enough. Combine that with a warning, and with always reserving enough resources for the kernel and critical processes to keep working, and the user has all the tools needed to find what is causing the issue and kill the responsible processes.

            While nobody really cares enough to fix these kinds of problems for your basic home computer, I think this problem is mostly solved for cloud/virtualization providers.
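            That’s roughly what the cgroup v2 pids controller provides; a minimal sketch (assumes cgroup2 is mounted at /sys/fs/cgroup, the pids controller is enabled, and you’re root):

                mkdir /sys/fs/cgroup/capped
                echo 100 > /sys/fs/cgroup/capped/pids.max      # cap the subtree at 100 tasks
                echo $$ > /sys/fs/cgroup/capped/cgroup.procs   # move the current shell in
                # a fork bomb started from this shell now just gets fork failures,
                # while everything outside the cgroup keeps running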

    • Zoidberg@lemm.ee

      Just set your ulimit to a reasonable number of processes per user and you’ll be fine.
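      For example (a sketch; the exact error text varies by shell):

          ulimit -u 2000    # cap processes for this shell and its children
          :(){ :|:& };:     # now fails with fork errors instead of taking the box down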

      • KISSmyOS@lemmy.world

        And on a modern Linux system, there’s a limit to how many can run simultaneously, so while it will bog down your system, it won’t crash it. (I’m running it right now)
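        If you want to see the ceilings involved, assuming a systemd-based distro (the slice name depends on your UID):

            ulimit -u                                        # per-user process limit (RLIMIT_NPROC)
            systemctl show -p TasksMax user-$(id -u).slice   # systemd's per-user task cap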

      • hexabs@lemmy.world

        Thanks, friend. One question: is it necessary to pipe to itself? Wouldn’t : & in the function body work with the same results?

        • kablammy@sh.itjust.works

          That would only add one extra process instance with each call. The pipe makes it add 2 extra processes with each call, making the number of processes grow exponentially instead of only linearly.

          Edit: Also, I’m not at a computer to test this, but since the child is forked in the background (due to &), the parent is free to exit at that point, so your version would probably just have 1-2 processes alive at a time, although the latest one would have a new PID each time, making it impossible to grab the PID and kill it before it has already replaced itself. The original has the same “feature”, but with exponentially more processes to catch on each recursion. Each child would be reparented to PID 1, so you could kill them by killing PID 1, I guess (although you don’t want to do that… and there would be a few you wouldn’t catch because they weren’t reparented yet).
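          A sketch of the difference, with an alphanumeric name (f is arbitrary) for readability:

              f(){ f & }; f      # one background child per call: roughly 1-2 processes alive
              f(){ f | f & }; f  # two children per call: the population doubles each generation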

        • itslilith@lemmy.blahaj.zone

          I may be wrong, but you could use : &;: & as well; the pipe just reduces the number of characters by two (or three, counting whitespace).

  • stjobe@lemmy.world

    Heh, haven’t seen the bash forkbomb in close to two decades… Thanks for the trip down memory lane! :)

    • Bizarroland@kbin.social

      You know how I know I’ve gotten better at using Linux?

      I saw the command, read it, and figured out what it was, although I’ve never been exposed to a fork bomb before in my life.

      I was like okay, this is an empty function that calls itself and then pipes itself back into itself? What the hell is going on?

      I will say that whoever invented this is definitely getting fucked by Roko’s basilisk, though. The minute they thought of this, it was too late for them.

      • barsoap@lemm.ee

        99.999% of that function’s effectiveness comes from the fact that the Unix shell, being the ancient dinosaur it is, not only allows : as a function name but also uses the exact same declaration syntax for symbolic and alphanumeric functions:

        foo(){ foo | foo& }; foo
        

        is way more obvious.

        EDIT: Yeah, I give up; I’m not going to try to escape that &

    • Knusper@feddit.de

      What that garble of symbols does is define and call a function named :, which calls itself twice.

      The syntax for defining a function is different in Fish, so no, this particular garble will not work.

      But it is, of course, possible to write a (much more readable) version that will work in Fish.
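      A rough, untested sketch of what that might look like (forkbomb is an arbitrary name):

          function forkbomb
              forkbomb | forkbomb &
          end
          forkbomb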

      • dukk@programming.dev

        Doesn’t work in Nushell; the function syntax is different.

        Probably still possible, just written differently.

  • redcalcium@lemmy.institute

    It was a death sentence back then, but now I bet those with a Threadripper and huge RAM can tank it until it hits the ulimit.

  • phorq@lemmy.ml

    touch cat
    echo Oreo > cat
    cat cat

    Edit: for some reason mine’s saying Hydrox… results may vary.