• Lyra_Lycan@lemmy.blahaj.zone · 43 points · 4 days ago

    Plot twist: they’re 256MB drives from 2002 and total… 61.44GB. Still impressive, nvm. If they were the largest currently available (36TB), they’d total 8.64PB.
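
    A rough back-of-the-envelope sketch of that math (hypothetical Python; the 240-drive count is only inferred from those totals, since 61.44 GB / 256 MB = 240, and isn’t confirmed anywhere in the thread):

    ```python
    # Hypothetical: capacity totals for an assumed 240-drive array (count inferred, not confirmed).
    DRIVE_COUNT = 240  # 61.44 GB / 0.256 GB per drive

    def total_capacity_gb(per_drive_gb: float, count: int = DRIVE_COUNT) -> float:
        """Raw capacity in GB, ignoring any RAID/redundancy overhead."""
        return per_drive_gb * count

    print(f"256 MB drives: {total_capacity_gb(0.256):.2f} GB")           # 61.44 GB
    print(f"36 TB drives:  {total_capacity_gb(36_000) / 1e6:.2f} PB")    # 8.64 PB
    ```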

    • thisbenzingring@lemmy.sdf.org · 20 points · 4 days ago

      That array is a POS. Changing failed drives in that would be a major pain in the ass… and the way it doesn’t dissipate heat, those drives probably failed pretty regularly.

      • EtherWhack@lemmy.world · 12 points · 4 days ago

        JBODs like those are actually pretty common in data centers, though, and are popular for cold-storage configs that don’t keep drives spun up unless needed.

        For cooling, they usually use the pressure gradient between what are called cold and hot aisles to force air through the server racks. That pressure tends to be strong enough that passive cooling works, and any fans on the hardware are there more to direct the airflow than to move it.

      • Fuck u/spez@sh.itjust.works · 13 points · 4 days ago

        If you’re paying per U of rack space for colocation, then maximizing storage density is going to be a bigger priority than ease of maintenance, especially since there should be multiple layers of redundancy involved here.

        • thisbenzingring@lemmy.sdf.org · 11 points (1 downvote) · 4 days ago

          You still have to replace failed drives; this design is poor.

          I work in a datacenter that has many drive arrays; my main direct-attached storage array has 900TB with redundancy. I have been pulling old arrays out, and even some of the older ones are better than this if they have front-loading drive cages.

          There are no airflow gaps in that thing… I bet the heat it generates is massive.

          • Agent641@lemmy.world · 3 points · 3 days ago

            They probably wait for something like 20% of the drives in an array to fail before taking it offline and swapping them all out.

            Also, this doesn’t sound like the architect’s problem; it sounds like the tech’s problem 🤷

    • Bad Jojo@lemmy.blahaj.zone (mod) · 9 points · 4 days ago

      The interface is SATA, not EIDE or SCSI, so I’m going to guess 2TB minimum, but more than likely they’re 8TB drives.

    • Ging@anarchist.nexus · 5 points · 4 days ago

      This is a helluva range. Do any wizards have a best guess at how much total disk space we’re looking at here?