- cross-posted to:
- selfhosted@lemmy.world
Context
Having started out in the world of Napster & Limewire, I’ve always relied on public sources. It wasn’t until the early '10s that I lucked into a Gazelle-based tracker that was started by some fellow community members. Unfortunately, I wasn’t paying enough attention when they closed shop and didn’t know how to move elsewhere. Combined with some life circumstances, I gave up the pursuit for the time being.
It wasn’t until recently that a friend was kind enough to help me get back in and introduced me to the current state of automation. Over the course of a few months, I’ve since built up the attached systems. I’ve been having an absolute blast learning and am very impressed with all of the contributions!
After all of the updates due to BF deals, I put together the attached diagram as it was starting to get too complex to keep all of the interactions in my head. 😅
Setup
- All of the services run in Docker containers.
- Each container is a separate Compose file managed by Systemd.
- The system itself is in a VM running on my home server (both Arch, btw).
- Tailscale is used for remote access to the local network.
- ProtonVPN is managed by Gluetun and uses a separate network for isolating services.
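In case anyone wants to copy the “one Compose file per service, managed by systemd” pattern, here’s a rough sketch of a template unit. The paths and unit name are just placeholders, not my actual setup:

```ini
# /etc/systemd/system/compose@.service — hypothetical template unit.
# Each service's docker-compose.yml lives in its own directory under
# /opt/compose/<name>, and %i expands to that name.
[Unit]
Description=Docker Compose service: %i
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/compose/%i
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```

Then adding or removing a service is just `systemctl enable --now compose@sonarr` (or `disable`), which is why I went this route instead of one giant Compose file.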
Questions
- What am I missing or can be improved?
- Is there a better way to document?
- What do you do differently that might be beneficial?
Thoughts
- I had Calibre set up at one point, but I really don’t like how it tracks files by renaming them. I have been considering trying to automate with the CLI instead, but haven’t gotten around to it yet.
- I’ve been toying with the idea of creating a file-arr for analyzing disk usage, performing common operations, and exposing a web-based upload/download client so I don’t have to mount the volume everywhere.
- Similarly, I’m interested in a way to aggregate logs/notifications/metrics. I’m aware of Notifiarr, but would prefer a self-hosted version.
- I just set up Last FM scrobbling so I don’t have any data yet. I’m hoping to use that for discovery and if possible, playlist syncing or auto-generation.
Notes
- Diagram was made using D2lang.
- Some of the connections have been simplified to improve readability / routing.
- Some services have been redacted out of an abundance of caution.
- I know VPN with Usenet isn’t necessary, but it’s easier to keep it consistent.
Also, thanks for the recommendations to check out deemix/Deezer. That worked really well! 😀
Edit: HQ version of diagram
Very nice. Can you share a `docker-compose.yml` for others to replicate this? Also, your diagram could be a bit higher quality.
Each service is a separate `docker-compose.yml`, but they are more-or-less the same as the example configs provided by each service. I did it this way as opposed to a single file to make it easier to add/remove services following this pattern.
I do have a higher quality version of the diagram, but had to downsize it a lot to get pictrs to accept it…
Ah, your instance must be limiting the size. lemmy.dbzer0.com allows you to upload anything and just downscales to 1024px max dimension. You can also just host on imgur etc.
Good point, updated with HQ link.
sheeeeesh.
Reminds me of factorio
The factory must grow
Gosh, a dream setup. I’m so far yet…
#humblebrag lol
Seriously tho, this is super awesome. I was gifted an 8 bay NAS several months ago and caught the bug again too. I’ve been slowly swapping out the 4TB drives for 16TB IronWolf Pros and downloading all the things. I have sonarr, prowlarr, and syncthing working so far, but I have to say, that was a pretty big pain in my assholes.
I have been running my server from an old 2018 Mac mini that I had laying around and just the other day found a good deal on a nicer NUC for Black Friday. I’d like to take it up a notch when I do the migration & add radarr, overseerr, and it sounds like dockerr and some others as well. This post was just the inspiration I needed!
Do you have any resources you could share that you used, or at least that you wish you would’ve used to educate yourself and/or simplify things? Most of what I’ve accomplished so far has just been through random discoveries in forums & research I’ve done from there. It feels a bit amateur and I’m wondering whether or not I should just start from scratch. I’m assuming there has to be a site where I can read about all my options & how they interact.
Cheers man, thanks!
The wiki is a great place to start. Also, most of the services have pretty good documentation.
The biggest tip would be to start with Docker. I had originally started running the services directly in the VM, but quickly ran into problems with state getting corrupted somewhere. After enough headaches I switched to Docker. I then had to spend a lot of time remapping all of the files to get it working again. Knowing where the state lives on your filesystem and that the service will always restart from a known point is great. It also makes upgrades or swapping components a breeze.
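To make the “know where state lives” point concrete, here’s the kind of minimal Compose layout I mean. The image, service name, and paths are just illustrative:

```yaml
# Hypothetical docker-compose.yml — the bind mounts are the whole point:
# all of the service's state sits in one known directory on the host,
# so the container can be destroyed and recreated freely.
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    volumes:
      - /opt/appdata/sonarr:/config   # all service state lives here
      - /mnt/media/tv:/tv             # media library, shared with other services
    restart: unless-stopped
```

Back up `/opt/appdata` and you’ve backed up the service; upgrading is just pulling a new image.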
Everyone has to start somewhere. Just take it slow and don’t be afraid to make mistakes. Good luck and have fun! 😀
I’m a little lost on what each of these components are. I see .sh files so I’m assuming you’re mostly writing these with Bash?
With this level of complexity I wonder if you’d benefit from running a k8s server. Just food for thought.
Looks like you’re having a good time for it. I always laugh at the similarity with this system building and the BUS designs of Factorio.
The `systemd.timers` are basically cronjobs for scripts I wrote to address a few of the pain points I’ve encountered with the setup. They’re either simple `curl` or `wget` and `jq` calls, or use Python for more complex logic. The rest are services that are either a part of or adjacent to the *arrs.
As for k8s, personally I feel that would add more complexity than it’s worth. I’m not looking for a second job. 😛
“But Kubernetes will simplify everything!!!1”
🤷♂️
I mean all problems are solved with another layer of abstraction right?
We need to go deeper
I don’t see Watchtower in there anywhere. Even just used as a simple on-demand updater, it’s worth the time to set it up. (Which is pretty minimal anyhow.) But it can also just run automatically and keep things up to date all the time.
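For the on-demand mode, the setup really is minimal. Something like this (using Watchtower’s official image; the socket mount is required so it can see your containers):

```shell
# One-shot update check: pulls newer images for running containers,
# recreates them, then exits. Drop --run-once to let it run on a schedule.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --run-once
```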
While there’s nothing particularly wrong with putting everything through a vpn, you could use a qbittorrentvpn docker image which runs a wireguard client with a kill switch which the torrent client can tunnel through.
The problem I’ve found is that the services will query indexers, and not all of the trackers allow you to use multiple IPs. This is why I found it easier to make all outbound requests go through the VPN, so I didn’t get in trouble. It’s also why I have the Firefox container set up inside the network, exposed over the local network as a VNC session, so I can browse the sites while maintaining a single IP.
I do have qbittorrent set up with a kill switch on the VPN interface managed by Gluetun.
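For anyone curious what that looks like, here’s a stripped-down sketch of the Gluetun pattern (credentials omitted; ports and images are illustrative):

```yaml
# Hypothetical sketch of routing qBittorrent through Gluetun.
# With network_mode: "service:gluetun", qBittorrent has no network stack
# of its own — if Gluetun (and its built-in firewall) goes down,
# torrent traffic simply stops. That's the kill switch.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=protonvpn
      - VPN_TYPE=wireguard
      # wireguard keys omitted
    ports:
      - 8080:8080   # qBittorrent web UI, published via Gluetun

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"
    depends_on:
      - gluetun
```

Any other service you attach the same way shares the single VPN IP, which is what keeps the trackers happy.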
If you don’t already, you can set up healthchecks for your containers, especially useful for qbit and Gluetun. That way, you can automatically restart one if any condition fails using Autoheal.
Also check qbitmanage to setup seeding goals.
And best of all, where is Recyclarr? Sync that bitch right into your arrs to get consistently only the very best out there.
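On the healthcheck suggestion: a minimal sketch might look like this (the URL, port, and intervals are placeholders to adapt):

```yaml
# Hypothetical healthcheck + Autoheal pairing. The container is marked
# "unhealthy" if its web UI stops answering; Autoheal watches the Docker
# socket for unhealthy containers carrying the autoheal=true label and
# restarts them.
services:
  qbittorrent:
    # ...rest of the service definition...
    labels:
      - autoheal=true
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080"]
      interval: 1m
      timeout: 10s
      retries: 3

  autoheal:
    image: willfarrell/autoheal
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always
```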
There’s some overlap with my `torrrents.py` and qbitmanage, but some of its other features sound nice. It also led me to Apprise, which might be the notifications solution I’ve been looking for!
Some of the arr-scripts already handle syncing the settings. I had to turn them off because they kept overwriting mine, but Recyclarr might be more configurable.
Thanks!
This guy automates.
If you want something for managing all your containers, consider Portainer. I’ve been using it with my homelab for a while and it’s invaluable for quickly dealing with issues that crop up.
Given what you’ve got running, I only really recommend, as others have, Portainer. It’s made my life so much easier. Edited this since I saw you have homarr and I must’ve missed it the first time.
Just an FYI: DO NOT put your *arrs behind a VPN, it will cause issues. https://wiki.servarr.com/radarr/faq#vpns-jackett-and-the-arrs
I get what they’re saying and it may be ‘technically correct’, but the issue is more nuanced than that. In my experience, some trackers have strict requirements or restricted auth tokens (e.g. can’t browse & download from different IPs). Proxying may be the solution, but I’d have to look at how it decides what traffic gets routed where.
https://trash-guides.info/Prowlarr/prowlarr-setup-proxy/ is useful when setting up the proxy in prowlarr for your indexers
Also, we say don’t put the *arrs behind a VPN because Cloudflare likes to just ban IPs at times, which will result in the *arrs not being able to access their metadata servers.
You’re running docker inside a vm? Why?
The first thing I would do is learn the 5-layer OSI model for networking. (The 7-layer is more common, but wrong). Start thinking of things in terms of services and layers. Make a diagram for each layer (or just the important layers. Layers 3 and up.)
If you can stomach it, learn network namespaces. It lets you partition services between network stacks without container overhead.
Using a vm or docker for isolation is perfectly fine, but don’t use both. Either throw docker on your host or put them all in as systemd services on a vm.
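For anyone curious what the namespace suggestion looks like in practice, here’s a rough sketch (requires root; all the names and addresses are arbitrary examples):

```shell
# Create an isolated network stack and wire it to the host with a veth pair.
ip netns add media                    # new, empty network namespace
ip link add veth0 type veth peer name veth1
ip link set veth1 netns media         # move one end inside the namespace

ip addr add 10.0.0.1/24 dev veth0     # host side
ip link set veth0 up
ip netns exec media ip addr add 10.0.0.2/24 dev veth1
ip netns exec media ip link set veth1 up
ip netns exec media ip link set lo up

# Launch a service inside the namespace — it only sees veth1's network.
# ("some-daemon" is a placeholder for whatever you want to isolate.)
ip netns exec media some-daemon
```

This gives you per-service network isolation without a container runtime, though you lose the filesystem/image conveniences Docker adds on top.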
The server itself is running nothing but the hypervisor. I have a few VMs running on it that make it easy to provision isolated environments. Additionally, it’s made it easy to snapshot a VM before performing maintenance in case I need to roll back. The containers provide isolation from the environment itself in the event of a service gone awry.
Coming from cloud environments where everything is a VM, I’m not sure what issues you’re referring to. The performance penalty is almost non-existent while the benefits are plenty.
I recently rebuilt my home server using containers instead of (qemu/KVM) VMs and I notice a performance benefit in some areas. Although I just use systemd-nspawn containers rather than docker as I don’t really see the need to install 3rd party software for a feature already installed on my OS.
I handle snapshots by using btrfs. Works great
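A sketch of that workflow, with illustrative subvolume paths:

```shell
# Read-only snapshot before maintenance (cheap, copy-on-write).
btrfs subvolume snapshot -r /srv/appdata /srv/.snapshots/appdata-pre-upgrade

# ...perform the upgrade. If it goes sideways, roll back by swapping
# the live subvolume for a writable copy of the snapshot:
# btrfs subvolume delete /srv/appdata
# btrfs subvolume snapshot /srv/.snapshots/appdata-pre-upgrade /srv/appdata
```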
At this point, it’s easier to just pay for all of the streaming services.
For a long time, that was the case. Then the greed nation attacked. Now they’ve reproduced the cable model on the web, and more than half of them have terrible clients / infrastructure.
If I could pay for a single service that operated similar to this setup:
- Tell it what I’d like to watch, while also surfacing similar content for discovery.
- Track progress in every show (without forgetting!).
- Never lose content I’ve been watching because it’s now in ‘another castle’.
- Give me a single place to view all tracked shows rather than loading each service individually.
I probably would sign up for it as that’s what was so successful for Netflix until all of the studios thought they could do better. And now the consumer has to suffer the consequences.
Maybe if you’re new to all this and/or have no interest. But if you’ve been tinkering for more than a few years, it’s just the PC version of a project car. It’s something you tinker with on the weekends, adding and refining as you go. I would never be able to negotiate multiple streaming services in a unified way to my satisfaction, so it’s not as if I really even have the option of paying for what I actually want from a service.
This is taking it above and beyond I’ll agree. I’m still in the old times where I’m manually finding my movie/show/etc and doing all the leg work by hand, only because I haven’t had time to learn all the modern stuff. But things like this are a great resource to get up to speed.
If you have the time and resources, I highly recommend it. Once it’s all running, it becomes mostly a ‘set it and forget it’ situation. You don’t have to remember to scroll through pages of search results to find content. It’ll automatically grab it for you based on your configured quality profile (or upgrade it to better quality). Additionally, you can easily stream it to any device on your home network (or remotely with a VPN).
You don’t have to do it all at once. Start with a single service you’re interested in and slowly add more over time.
Nah I feel ya, I’ve been seeing all the various configurations since I came to Lemmy. I’ve just had a hell of a summer work-wise (6-7 days a week, yay being a small business owner) plus a good amount of travel for weddings and the like, so I just haven’t had the time to sit down with it all. One of my clients has a whole bunch of 2 year old metal and SAS drives they’re giving me in a few weeks once they get underway with operations due to a corporate upgrade, and once I slow down for the winter season I fully intend on diving into this.
I’ve had a Plex server running on FreeBSD for years, I just haven’t set it up since moving back in June. But I’m getting really tired of all the bullshit from these streaming services, and I’m looking forward to taking back control of my entertainment. I just have to make it palatable to my wife lol.
To me the *arrs are great enough for the wishlist/todo factor alone.
The download/management feature is just the cherry on top.
I honestly took a shot at that. I didn’t have everything but I had most of it. I also had every channel available on cable.
The streaming wars have honestly ruined it. If you’re just looking for something to watch, you will of course be able to find something. But if you want specific content, you might as well flip a coin. Oh, it’s on Netflix. No, Netflix lost that license. Oh, Max has that. Wait, no, Max went under. No wait, they’re back, but they don’t have it anymore. Oh, that’s a Disney property, Disney+ should have it. Nope, Disney pulled that offline for the time being.
Screw it I’ll make my own streaming service with hookers and blackjack.
You can simplify it way down to Kodi + RD and have your own streaming service. This looks more like a hobby though to get every little thing just right.