Highly doubt it’s worth it in the long run due to electricity costs alone
Depends.
Toss the GPU/wifi, disable audio, throttle the processor a ton, and set the OS to power saving, and old PCs can be shockingly efficient.
You can slow the RAM down too. You don’t need XMP enabled if you’re just using the PC as a NAS. It can be quite power hungry.
Eh, older RAM doesn’t use much. If it runs close to stock voltage, maybe just set it at stock voltage and bump the speed down a notch, then you get a nice task energy gain from the performance boost.
There was a post a while back of someone trying to eke every single watt out of their computer. Disabling XMP and running the RAM at the slowest speed possible saved like 3 watts, I think. An impressive savings, but at the cost of HORRIBLE CPU performance. But you do actually need at least a little bit of grunt for a NAS.
At work we have some of those Atom-based NASes, and the combination of the lack of CPU grunt and horrendous single-channel RAM speeds makes them absolutely crawl. One HDD on its own performs the same as this RAID 10 array.
Stuff designed for much higher peak usage tends to have a lot more waste.
For example, a 400W power supply (which is what’s probably in the original PC of your example) will waste more power than a lower wattage one (unless it’s a very expensive one), so in that example of yours it should be replaced by something much smaller.
Even beyond that, everything in there - another example, the motherboard - will have a lot more power leakage than something designed for a low power system (say, an ARM SBC).
Unless it’s a notebook, that old PC will always consume more power than, say, an N100 Mini-PC, much less an ARM based one.
For example, a 400W power supply (which is what’s probably in the original PC of your example) will waste more power than a lower wattage one
In my experience power supplies are most efficient near 50% utilization. be quiet! PSUs have charts about it.
The way one designs hardware is to optimize for the most common usage scenario with enough capacity to account for the peak use scenario (and with some safety margin on top).
(In the case of silent power supplies, they would also aim for lower power leakage in the common usage scenario so as to reduce the need for fans, and the physical circuit design would also account for things like airflow and space for a large, slower fan, since those are quieter.)
However specifically for power sources, if you want to handle more power you have to for example use larger capacitors and switching MOSFETs so that it can handle more current, and those have more leakage hence more baseline losses. Mind you, using more expensive components one can get higher power stuff with less leakage, but that’s not going to happen outside specialist power supplies which are specifically designed for high-peak use AND low baseline power consumption, and I’m not even sure if there’s a genuine use case for such a design that justifies paying the extra cost for high-power low-leakage components.
In summary, whilst theoretically one can design a high-power low-leakage power source, it’s going to cost a lot more because you need better components, and that’s not going to be a generic desktop PC power source.
That said, since silent PC power supplies are designed to produce less heat, they have less leakage (power leakage is literally power turning into heat). Even though the design is targeted at that power supply’s most common usage scenario (which is not going to be 15W), that still probably means better components and hence lower baseline leakage, so one should waste less power if that desktop is repurposed as a NAS. Still won’t beat a dedicated ARM SBC (not even close), but it might end up cheap enough to be worth it if you already have that PC with a silent power supply.
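To put rough numbers on that low-load efficiency point (the efficiency figures below are illustrative assumptions, not measurements of any particular unit):

```python
# Rough illustration of why an oversized PSU hurts at NAS-idle loads.
# Both units deliver the same 15 W of DC power; only efficiency differs.

def wall_draw(dc_load_w, efficiency):
    """Power drawn at the wall for a given DC load and conversion efficiency."""
    return dc_load_w / efficiency

# A 400 W PSU is often only ~60% efficient way down at a 15 W load,
# while a right-sized supply might manage ~85% there (assumed figures).
big_psu = wall_draw(15, 0.60)    # ~25.0 W at the wall
small_psu = wall_draw(15, 0.85)  # ~17.6 W at the wall

print(f"oversized PSU: {big_psu:.1f} W, wasted {big_psu - 15:.1f} W")
print(f"right-sized:   {small_psu:.1f} W, wasted {small_psu - 15:.1f} W")
```

The exact curve varies per unit, and 80 Plus certification only guarantees efficiency at 20/50/100% load, so behaviour at a 15W idle load is often much worse than the headline number.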
So I did this, using a Ryzen 3600, with some light tweaking the base system burns about 40-50W idle. The drives add a lot, 5-10W each, but they would go into any NAS system, so that’s irrelevant. I had to add a GPU because the MB I had wouldn’t POST without one, so that increases the power draw a little, but it’s also necessary for proper Jellyfin transcoding. I recently swapped the GPU for an Intel ARC A310.
By comparison, the previous system I used for this had a low-power, fanless Intel Celeron; with a single drive and two SSDs it drew about 30W.
OK, I’m glad I’m not the only one that wants a responsive machine for video streaming.
I ran a Pi 400 with Plex for a while. I don’t care to save 20W while I wait for the machine to respond after every little scrub of the timeline. I want to have a better experience than Netflix. That’s the point.
I’m still running a 480 that doubles as a space heater (I’m not even joking; I increase the load based on ambient temps during winter)
A 486, eh?
A desktop running at low usage wouldn’t consume much more than a NAS, as long as you drop the video card (which wouldn’t be running anyway).
Take only that extra and you probably have a few years’ usage before the additional electricity costs overrun the NAS cost. Where I live that’s around 5 years for an estimated extra 10W.
as long as you drop the video card
As I wrote below, some motherboards won’t POST without a GPU.
Take only that extra and you probably have a few years’ usage before the additional electricity costs overrun the NAS cost. Where I live that’s around 5 years for an estimated extra 10W.
Yeah, and what’s more, if one of those appliance-like NASes breaks down, how do you fix it? With a normal PC you just swap out the defective part.
Most modern boards will. Also there’s integrated graphics on basically every single current CPU. Only AMD on AM4 held out on having iGPUs for so damn long.
Nah, I disagree. My dedicated NAS system consumes around 40W idling and is a very small machine. My old PC would draw 100W idling in an ATX-sized case. Of course I can use my old PC as a NAS, but these two are different categories of device.
I want to reduce wasteful power consumption.
But I also desire ECC for stability and data corruption avoidance, and hardware redundancy for failures (Which have actually happened!!)
Begrudgingly I’m using dell rack mount servers. For the most part they work really well, stupid easy to service, unified remote management, lotssss of room for memory, thick PCIe lane counts, stupid cheap 2nd hand RAM, and stable.
But they waste ~100 watts of power per device though… That stuff adds up, even if we have incredibly cheap power.
If your PC has 32gb of RAM or more throw it away (in my trash bin) immediately.
OK, science time. Somewhat arbitrary values used; the point is that there’s an amortization calculation, and you’ll need to run your own with accurate input values.
A PC drawing 100W 24/7 uses 877 kWh; at $0.15/kWh that’s $131.49 per year.
A NAS drawing 25W 24/7 uses 219 kWh; at $0.15/kWh that’s $32.87 per year.
So, in this hypothetical case you “save” about $100/year on power costs running the NAS.
Assuming a capacity-equivalent NAS might cost $1200, you’re better off using the PC you have for the next 12 years rather than buying the NAS.
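The amortization math above can be sketched in a few lines of Python (numbers from this example; plug in your own wattages, tariff, and NAS price):

```python
# Break-even sketch for "keep the old PC" vs "buy a NAS",
# using the thread's illustrative numbers.

HOURS_PER_YEAR = 24 * 365.25

def annual_cost(watts, price_per_kwh):
    """Yearly electricity cost of a device running 24/7."""
    return watts / 1000 * HOURS_PER_YEAR * price_per_kwh

pc_cost = annual_cost(100, 0.15)   # ≈ $131.49/yr
nas_cost = annual_cost(25, 0.15)   # ≈ $32.87/yr
savings = pc_cost - nas_cost       # ≈ $98.62/yr

nas_price = 1200
print(f"payback time: {nas_price / savings:.1f} years")  # ≈ 12.2 years
```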
This ignores that the heat generated by the devices is desirable in winter so the higher heat output option has additional utility.
This ignores that the heat generated by the devices is desirable in winter so the higher heat output option has additional utility.
But the heat is a negative in the summer. So local climate might tip the scales one way or the other.
Assuming a capacity equivalent NAS might cost $1200
Either you already have drives and could use them in a new NAS or you would have to buy them regardless and shouldn’t include them in the NAS price.
I bought a two-bay Synology for $270, and a 20TB HDD for $260. I did this for multiple reasons. The HDD was on sale, so I bought it and kept buying things. Also, I couldn’t be buggered to learn everything necessary to set up a homemade NAS. Also also, I didn’t have an old PC. My current PC is a Ship of Theseus that I originally bought in 2006.
You’re not wrong about an equivalent NAS to my current PC specs/capacity being more expensive. And yes, I did spend $500+ on my NAS. And yet I also saved several days’ worth of study, research, and trial and error by not building my own.
That being said, reducing e-waste by converting old PCs into Jellyfin/Plex streaming machines, NAS devices, or personal servers is a really good idea.
In the UK the calculus is quite different, as it’s £0.25/kWh or over double the cost.
Also, an empty Synology 4-bay NAS can be gotten for like £200 second hand. Good enough if you only need file hosting. Mine draws about 10W compared to an old Optiplex that draws around 60W.
With that math, using the NAS saves you 1.25 pence per hour, so it pays for itself in about 2 years.
My gaming PC runs at like 50W idle and only draws a ton of power if it’s being used for something. It would be more accurate to consider a PC to draw maybe 1.75x the power of a NAS, but then account for the cost of buying the NAS. I’d say a NAS would probably take 2-4 years to pay off depending on regional power prices.
… 100W? Isn’t that like a really bygone era? CPUs of the past decade can idle at next to nothing (like, there isn’t much difference between an idling i7/i9 and a Pentium from the same era/family).
Or are we talking about ARM? (Sorry, I don’t know much about them.)
All devices on the computer consume power.
The CPU being the largest in this context. Older processors usually don’t have as aggressive throttling as modern ones for low power scenarios.
Similarly, the “performance per watt” of newer processors is incredibly high in comparison, meaning they can operate at much lower power levels while running the same workload.
In the fall/winter in northern areas it’s free! (Money that would already be spent on heating).
Summer is a negative though, as air conditioning needs to keep up. But the additional cost is ~1/3 of the heat output for most ACs (100W of heat requires <30W of refrigeration input to move).
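Folding both seasons into one number, a rough sketch (the COP and season split below are assumptions, and the heating credit only really applies if you otherwise heat with electric resistance):

```python
# Sketch of how climate changes the effective cost of waste heat.
# heating_frac / cooling_frac: fraction of the year you heat / cool.
# ac_cop: assumed air-conditioner coefficient of performance (>3 per the
# parent comment's "100W of heat needs <30W to move").

def effective_watts(waste_w, heating_frac, cooling_frac, ac_cop=3.5):
    """Net extra electrical load once winter heat is credited (assuming
    electric resistive heating) and summer AC removal is charged."""
    heating_credit = waste_w * heating_frac            # heat you'd pay for anyway
    cooling_penalty = waste_w * cooling_frac / ac_cop  # AC power to pump it out
    return waste_w - heating_credit + cooling_penalty

# 100 W of waste heat, heating 40% of the year, cooling 30%:
print(f"{effective_watts(100, 0.40, 0.30):.1f} W effective")  # 68.6 W effective
```

So in a cold climate the old PC’s extra draw costs less than the nameplate numbers suggest, while in a hot one it costs a bit more.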
And as usual everyone is saying NAS, but talking about servers with a built in NAS.
I’m not saying you can’t run your services on the same machine as your NAS, I’m just confused why every time there’s a conversation about NASs it’s always about what software it can run.
At this point you’re just fighting semantics. Even a commercial NAS is reliant on the software too, like with Synology. They run the disk management but also can run Docker and VMs with their built-in hypervisor.
The way I see it, a box of drives still needs something to connect it to your network.
And that something that can only do a basic connection costs only a little less than something that can run a bunch of other stuff too.
You can see why it all gets bundled together.
The main concern with old hardware is probably power draw/efficiency; depending on how old your PC is, it might not be the best choice. But remember: companies are getting rid of old hardware fairly quickly, so it can be a good choice and might be available for dirt cheap or even free.
I recently replaced my old Synology NAS from 2011 with an old Dell Optiplex 3050 workstation that a company threw away. The system draws almost twice the power (25W) compared to my old Synology NAS (which only drew 13W, both with 2 spinning drives), but the increase in processing power and flexibility with TrueNAS is very noticeable, and it allowed me to also replace an old Raspberry Pi (6W) that only ran Pi-hole.
So overall, my new home-server is close in power draw to the two devices it replaced, but with an immense increase in performance.
True for notebooks. (For years my home NAS was an old ASUS Eee PC.)
Desktops, on the other hand, tend to consume a lot more power (how bad it is depends on the generation). They’re simply not designed to be a quiet device sitting in a corner continuously running a task with low CPU demand: stuff designed for far more demanding tasks will have things like much bigger power supplies, which are less efficient at low power demand (when something is designed to put out 400W, wasting 5 or 10W is no big deal; when it’s designed to put out 15W, wasting 5 or 10W would make it horribly inefficient).
Meanwhile the typical NAS out there is running an ARM processor (which are known for their low power consumption) or at worst a low-powered Intel processor such as the N100.
Mind you, the idea of running your own NAS software is great (one can do way more with that than with a proprietary NAS, since it’s far more flexible) as long as you put it on the right hardware for the job.
I have used laptops like this, and I find that eventually the cooling system fails, probably because they aren’t meant to run all the time like a server would be. Various brands, including Dell, Lenovo, MSI and Apple. Maybe it’s the dust in my house; I don’t know.
How does a notebook—outside of including a DAS—provide meaningful storage volume?
When I had my setup with an ASUS Eee PC I had mobile external HDDs plugged into it via USB.
Since my use case was long-term storage and feeding video files to a media TV box, the bandwidth limit of USB 2.0 and using HDDs rather than SSDs was fine. Also, back then I had 100Mbps Ethernet, so that too limited bandwidth.
Even in my current setup, where I use a Mini-PC to do the same, I still keep the storage on external mobile HDDs, and now the bandwidth limits are 1Gbps Ethernet and USB 3.0, which is still fine for my use case.
Because my use case now is long-term storage, home file sharing and torrenting, my home network uses the same principles as distributed systems and modern microprocessor architectures: smaller, faster data stores with often-used data close to where it’s used (for example, fast smaller SSDs with the OS and game executables inside my gaming machine, plus a torrent server on that same Mini-PC using its internal SSD), then layered outwards with decreasing speed and increasing size (that same desktop machine has an internal “storage” HDD filled with low-use files, and one network hop away there’s the Mini-PC NAS sharing its external HDDs containing longer-term storage files).
The whole thing tries to balance storage costs with usage needs.
I suppose I could improve performance a bit more by setting up some of the space on the internal SSD in the Mini-PC as a read/write cache for the external HDDs, but so far I haven’t had the patience to do it.
I used to design high-performance distributed computing systems, and funnily enough my home setup follows the same design principles (which I had not noticed until thinking about it now as I wrote this).
I started my media server in 2020 with an e-wasted i7-3770 Dell tower I snagged out of the e-waste pile. Ran Jellyfin, audiobookbay, Navidrome, Calibre-Web and an arr stack with about a dozen users like a champ. Old hardware rules if you don’t use Windows.
Mine didn’t, but it is from 2006. I think it is messed up.
Big shout out to Windows 11 and their TPM bullshit.
Was thinking that my wee “Raspberry PI home server” was starting to feel the load a bit too much, and wanted a bit of an upgrade. Local business was throwing out some cute little mini PCs since they couldn’t run Win11. Slap in a spare 16 GB memory module and a much better SSD that I had lying about, and it runs Arch (btw) like an absolute beast. Runs Forgejo, Postgres, DHCP, torrent and file server, active mobile phone backup etc. while sipping 4W of power. Perfect; much better fit than an old desktop keeping the house warm.
Have to think that if you’ve been given a work desktop machine with a ten-year old laptop CPU and 4GB of RAM to run Win10 on, then you’re probably not the most valued person at the company. Ran Ubuntu / GNOME just fine when I checked it at its original specs, tho. Shocking, the amount of e-waste that Microsoft is creating.
Question, what’s the benefit of running a separate DHCP server?
I run openwrt, and the built in server seems fine? Why add complexity?
I’m sure there’s a good reason I’m just curious.
The router provided with our internet contract doesn’t allow you to run your own firmware, so we don’t have anything so flexible as what OpenWRT would provide.
Short answer; in order to Pi-hole all of the advertising servers that we’d be connecting to otherwise. Our mobile phones don’t normally allow us to choose a DNS server, but they will use the network-provided one, so it sorts things out for the whole house in one go.
Long, UK answer: because our internet is being messed with by the government at the moment, and I’d prefer to be confident that the DNS look-ups we receive haven’t been altered. That doesn’t fix everything - it’s a VPN job - but little steps.
The DHCP server provided with the router is so very slow in comparison to running our own locally, as well. Websites we use often are cached, but connecting to something new takes several seconds. Nothing as infuriating as slow internet.
Oh you mean DNS server, yes ok that makes sense. Yeah I totally understand running your own.
If I understand correctly, DHCP servers just assign local IPs on initial connection, and configure other stuff like pointing devices to the right DNS server, gateway, etc
Sorry, putting the two things together, my mistake. My router doesn’t let you specify the DNS server directly, but it does allow you to specify a different DHCP server, which can then hand out new IPs with a different DNS server specified, as you say. Bit of a house of cards. DHCP server in order to be the DNS server too.
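For what it’s worth, dnsmasq (which Pi-hole itself is built on) can do exactly that: act as the LAN’s DHCP server and hand out whatever DNS server you like. A minimal sketch with made-up example addresses (adjust to your LAN, and disable the router’s own DHCP so the two don’t fight):

```ini
# dnsmasq.conf fragment (addresses are examples only)
dhcp-range=192.168.1.100,192.168.1.200,12h    # lease pool and lease time
dhcp-option=option:router,192.168.1.1          # keep the ISP router as the gateway
dhcp-option=option:dns-server,192.168.1.53     # hand out the Pi-hole box as DNS
```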
I’ve made a decent NAS out of a Raspberry Pi 4. It uses USB-to-SATA converters and old hard drives.
My setup has one 3TB drive and two 1.5TB drives. The 1.5TB drives form a 3TB volume using RAID, which then combines with the 3TB drive to make redundant storage.
Yes it’s inefficient AF but it’s good enough for full HD streaming so good enough for me.
I’m too stingy to buy better drives.
Better to build it from scratch, your desktop PC does not have server-grade hardware. No ECC, no IPMI, not enough SATA ports, etc.
I think the self-hosting community needs to be more honest with itself about separating self hosting from building server hardware at home as separate hobbies.
You absolutely don’t need server-grade hardware for a home/family server, but I do see building a proper server as a separate activity, kinda like building a ship in a bottle.
That calculation changes a bit if you’re trying to host some publicly available service at home, but even that is a bit of a separate thing unless you’re running a hosting business, at which point it’s not really a home server anyway, even if it happens to sit inside your house.
None of that really matters for a home media server. Even the limited SATA ports, worst case you have to grab a cheap expansion card.
Power consumption is a much bigger concern, a purpose built NAS is much more efficient than a random old PC.
I used to have a 5700G system that I had to switch out for a 14600K system because of QuickSync passthrough.
I got my 14600K down to 55W from 75W with everything else being equal. Insane how efficient some setups can be.
My 16TB Pi sips 13W max or 8W idle. But it doesn’t have the encoding power or storage for normal work, so it’s warm storage.
I don’t know why yall are being so NASty… seriously what’s a NAS?
Network Attached Storage
The number one concern with a NAS is the power draw. I can’t think of many systems that run under 30W.