A year ago I built a NAS to reduce my reliance on cloud services, and set up an arr stack. I went with TrueNAS Scale, which was on Bluefin at the time. In the past 12 months, TrueNAS Scale has been through FOUR major OS versions, with a fifth already announced. At least one of those involved a release train switch, so despite diligently checking for updates in the dashboard, I was left in the dust on an obsolete OS and didn’t find out until it was already a huge hassle to upgrade.

I’ve been really happy with the utility and benefit of having this tool, but holy smokes how is anybody supposed to keep up with all of this? This is far from my only hobby, and I simply do not have the time, patience, or interest for a constant race to keep up with vetting new release versions and fixing what breaks every 3 weeks. I have enough tinkering hobbies as it is.

On top of that, there’s the whole blow up with TrueCharts, which has also left me with an entire suite of obsolete albatrosses around my NAS that I need to deal with. Am I still waiting for them to figure out an upgrade path? I don’t even know anymore.

Sorry for the rant, but I guess what I’m looking for is: how do you keep up with the constant maintenance and updates, and where do I go from here, in February 2025, with a system running Bluefin 22.12, a 32TB ZFS pool (RAIDZ1) that has to remain intact, and a handful of TrueCharts apps that I don’t want to lose the data from (e.g. Jellyfin configs/watch history)?

  • kalleboo@lemmy.world · 3 hours ago

    This is why I’m still using a Synology ¯\_(ツ)_/¯

    I can install all the fun stuff I want in Docker, but for the core OS services, it’s outsourced to Synology to maintain for me

  • PieMePlenty@lemmy.world · 5 hours ago

    I use Debian, so what’s to keep up with? Running apt upgrade is literally everything I need. My home server doesn’t take a lot of my time except when I want to tweak something or introduce something new. I don’t really follow all the trendy stuff; I just have it do what I need.

  • MXX53@programming.dev · 18 hours ago

    I run a Fedora server.

    All of my apps are in docker containers set to restart unless stopped by me.

    Then I run a cron job that is scheduled at like 3 or 4am that runs docker pull on all containers and restarts them. Then it runs all system updates and restarts the server.

    Every week or so I just spot check to make sure it is still working. This has been my process for like 6 months without issue.
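
    As a rough sketch, that kind of nightly job could look something like the script below (the script name, paths, and the single Compose project are illustrative assumptions, not necessarily the exact setup; note that containers have to be recreated rather than just restarted to actually pick up a newly pulled image):

    ```sh
    #!/bin/sh
    # Illustrative nightly update job; run from cron, e.g.:
    #   0 4 * * * /usr/local/bin/nightly-update.sh
    # Paths and the single Compose project are assumptions.

    # Pull newer images and recreate any container whose image changed.
    # (A plain `docker restart` would keep running the old image.)
    cd /srv/apps && docker compose pull && docker compose up -d

    # Fedora system updates, then reboot.
    dnf -y upgrade
    systemctl reboot
    ```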

  • kylian0087@lemmy.dbzer0.com · 1 day ago

    You can choose a slower train for Scale. Go for the stable release or even the enterprise release, and update once every few months or so.

    I went with Talos OS for my apps after the mess from iXsystems, and for the most part it has been set-and-forget.

    • notfromhere@lemmy.ml · 9 hours ago

      Do you run Talos on bare metal or on something like Proxmox? Care to discuss your k8s stack?

  • Fedegenerate@lemmynsfw.com · 1 day ago

    Release: stable

    Keep the updates as hands-off as possible: Docker Compose, TTeck’s LXC updater, automatic upgrades.

    I come through once a week or so to update the stacks (dockge > stack > update), and once a month or so to update the machines (I have 5 total). Total time updating is about 3 hours a month. I could drop that a lot once I get around to writing some scripts to update the Docker images (something like the sketch below); then I’d just have to run “apt update && apt upgrade”.
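
    A rough sketch of what such a script might look like (the /opt/stacks layout is an assumption; each subdirectory is one Compose stack):

    ```sh
    #!/bin/sh
    # Hypothetical stack updater: walk every Compose project under /opt/stacks
    # (directory layout is an assumption) and update each stack in place.
    for stack in /opt/stacks/*/; do
        (cd "$stack" && docker compose pull && docker compose up -d)
    done

    # Clear out superseded images so disk use doesn't creep up.
    docker image prune -f
    ```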

    Minimise attack surface and outsource security. I have nothing at all open to the internet; I use Tailscale to create tunnels. I’m trusting my security to Tailscale, but they are much, much better at it than I am.

    • sugar_in_your_tea@sh.itjust.works · 4 hours ago

      Automatically upgrading Docker images sounds like a recipe for disaster because:

      • it could pull down a change that requires manual intervention, so things “randomly” break
      • Docker holds on to everything, so you’d need to prune old images or you’ll eventually run out of disk space; and if a container is stopped, that prune could remove the image it needs to start again (good luck if the newer images are incompatible with the state from when it last ran)

      That’s why I refuse to automate updates. I sometimes go weeks or months between using a given service, so I’d rather use vulnerable containers than have to go fix it when I need it.

      I run OS updates every month or two, and honestly I’d be okay automating those. I run docker pulls every few months, and there’s no way I’d automate that.

      • Fedegenerate@lemmynsfw.com · 4 hours ago

        I’ve encountered that before, with Watchtower updating parts of a service and breaking the whole stack. But automating a stack update, as opposed to a service update, should mitigate all of that. I’ll include a system prune in the script.

        Most of my stacks are stable, so aside from breaking changes I should be fine. If I hit a breaking change, I keep backups; I’ll rebuild and update manually. I think that’ll be a net time saver overall.

        I keep two Docker LXCs, one for the arrs and one for everything else. I might make a third LXC for things that currently require manual updates; Immich is the only one at the moment.

        • sugar_in_your_tea@sh.itjust.works · 3 hours ago

          Watchtower

          Glad it works for you.

          Automatic updates of software with potential breaking changes scares me. I’m not familiar with watchtower, since I don’t use it or anything like it, but I have several services that I don’t use very often, but would suck if they silently stopped working properly.

          When I think of a service, I think of something like Nextcloud, Immich, etc., even if they consist of multiple containers. For example, I have separate containers for LibreOffice Online and Nextcloud, but I upgrade them together. I don’t want automated upgrades of either because I never know if future builds will be compatible. So I go update things when I remember, but I make sure everything works afterward.

          That said, it seems watchtower can be used to merely notify, so maybe I’ll use it for that. I certainly want to be around for any automatic updates though.

          • Fedegenerate@lemmynsfw.com · 30 minutes ago

            It’s Watchtower that I had problems with, because of exactly what you described. Watchtower will drop your microservice, say a database, to update it and then not restart the things that depend on it. It can be great, just not in the ham-fisted way I used it. So instead I’m going to update the stack together: everything drops, updates, and comes back up in the correct order.

            Uptime Kuma can alert you when a service goes down. I’m also constantly in my Homarr homepage, which tells me if it can’t ping a service; then I go investigating.

            I get that it’s scary, and after my Watchtower trauma I was hesitant to go automatic too. But I’m managing 5 machines now, and getting more, so I have to think about scale.

  • hperrin@lemmy.ca · 2 days ago

    You might want to think about running a “stable” or “LTS” OS and spinning things up in Docker instead. That way you only have to do OS-level updates very rarely.

    • Zink@programming.dev · 9 hours ago

      Thanks for this. I’ve recently been recreating my home server on good hardware and have been thinking it’s time to jump into self-hosting more stuff. I’ve used Docker a bit, so I guess I’ll have to do it the right way. It’s always good to know which choices now will avoid issues later.

    • HeyJoe@lemmy.world · 2 days ago

      I learned this the hard way as well… I did a big OS update on mine once and it broke almost every application running on it. Docker still worked perfectly. I transferred everything I could to Docker after that.

  • Darkassassin07@lemmy.ca · 2 days ago

    OS updates I only bother with every 6-12 months, though I also use Debian, which doesn’t push major updates all that regularly.

    As far as software goes, pretty much everything is in a Docker container, with Watchtower automatically pulling new updates nightly at 4 am. It sends me email notifications, so it’ll tell me if an update fails; that’s combined with Uptime Kuma notifying me if any of my services is unavailable for whatever reason.
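
    For reference, a Watchtower container along those lines might look roughly like this; the schedule uses Watchtower’s six-field cron format, and the SMTP values are placeholders, so check the Watchtower docs for the exact variable names:

    ```sh
    # Rough sketch of a Watchtower setup: nightly checks at 4 am plus email
    # notifications. SMTP values are placeholders; verify the variable names
    # against the Watchtower documentation.
    docker run -d --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -e WATCHTOWER_SCHEDULE="0 0 4 * * *" \
      -e WATCHTOWER_CLEANUP=true \
      -e WATCHTOWER_NOTIFICATIONS=email \
      -e WATCHTOWER_NOTIFICATION_EMAIL_TO=me@example.com \
      -e WATCHTOWER_NOTIFICATION_EMAIL_FROM=watchtower@example.com \
      -e WATCHTOWER_NOTIFICATION_EMAIL_SERVER=smtp.example.com \
      containrrr/watchtower
    ```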

    The rest I’ll usually do with the OS updates. Just because an update was released doesn’t mean you’ve gotta drop everything and install it right this moment.

  • bluGill@fedia.io · 22 hours ago

    At least you get updates. I’m running TrueNAS Core, which isn’t updated anymore, and I have some jails doing things, so I can’t migrate to Scale easily.

    The good news is that it still works despite no updates; it does everything it used to. There is almost zero reason to update a working NAS if it is behind a firewall.

    The bad news is that those jails are doing useful things, and because I’m out of date I can’t update what’s in them. Some of those services have new versions with features I really, really want.

    I have ordered an N100 (it should arrive tomorrow), and I’m going to manually migrate the useful services to it one at a time. Once that is done I’ll probably switch to XigmaNAS so I can stick with FreeBSD (I’ve always preferred FreeBSD). That will leave my NAS as just file storage for a while, though depending on how I like XigmaNAS I might or might not run services on it.

    • WhyJiffie@sh.itjust.works · 5 hours ago

      The good news is that it still works despite no updates; it does everything it used to. There is almost zero reason to update a working NAS if it is behind a firewall.

      If all users and devices on the network are well behaved and don’t install every random app (even from the Play Store), then yeah, it’s less of a risk.

      • bluGill@fedia.io · 13 hours ago

        Only the most basic security. It is out of date according to the pkg system, so the jails cannot be updated.

  • Appoxo@lemmy.dbzer0.com · 23 hours ago

    Docker: More or less automatically upgraded (Compose)
    Proxmox/TrueNAS: My setup breaks often enough, or I want to change something often enough, that I check it every once in a while and run updates then
    Main Debian NAS: Automatic updates (apt; see the sketch below)
    Raspberry Pi: Automatic updates (apt)
    Windows: If it prompts me and I am shutting it down anyway: fine, thanks for notifying

    I stopped chasing updates quite some time ago.
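
    For the Debian and Raspberry Pi boxes, a common way to get those automatic apt updates is the unattended-upgrades package (just a sketch; the fine-tuning lives under /etc/apt/apt.conf.d/):

    ```sh
    # One common way to enable hands-off apt updates on Debian / Raspberry Pi OS.
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades
    ```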

  • vividspecter@lemm.ee · 1 day ago

    I use NixOS, so if an update breaks something, I just roll back. And since it’s effectively a rolling-release distribution, there isn’t any risk of being left behind on an outdated version.
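
    The rollback itself is a one-liner (and older generations also stay selectable in the boot menu):

    ```sh
    # Switch back to the previous NixOS system generation after a bad update.
    sudo nixos-rebuild switch --rollback
    ```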

    • Object@sh.itjust.works · 1 day ago

      Same here. I spent last month transitioning all my servers to NixOS and it feels so comfy! When I do something that might break stuff, I test it on my desktop first, and then add it to the server’s config later.

      --target-host and --use-remote-sudo make it even better too.
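
      For anyone curious, a remote deploy with those flags looks something like this (the host name and flake attribute are placeholders):

      ```sh
      # Build the config and activate it on a remote NixOS machine over SSH.
      # "myserver" and the flake attribute are placeholders.
      nixos-rebuild switch --flake .#myserver \
        --target-host admin@myserver \
        --use-remote-sudo
      ```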

  • sugar_in_your_tea@sh.itjust.works · 1 day ago

    Constant maintenance? What’s that?

    Here’s my setup:

    • OS - openSUSE Leap - I upgrade when I remember
    • software - Docker images in a Docker Compose file; upgrading is a simple docker command (sketch below), and I’ll only do it if I need something in the update
    • hardware - old desktop; I’ll upgrade when I have extra hardware

    I honestly don’t think about it. I run updates when I get to it (every month or so), and I’ll do an OS upgrade a little while after a new release is available (every couple of years?). Software gets updated periodically if I’m bored and avoiding more important things. And I upgrade one thing at a time, so I don’t end up with everything breaking at once. BTRFS snapshots mean I can always roll back if I break something.
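
    Concretely, the “simple docker command” from the list above and a snapshot rollback might look like this (snapper is openSUSE’s default snapshot tool; the paths are illustrative):

    ```sh
    # Update one service's images and recreate whatever changed.
    cd /srv/nextcloud && docker compose pull && docker compose up -d

    # If an update goes sideways, roll the root filesystem back to the
    # pre-update BTRFS snapshot (snapper is the default tooling on openSUSE).
    sudo snapper rollback
    sudo reboot
    ```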

    I don’t even know what TrueCharts is. Maybe that’s your issue?

  • drkt@scribe.disroot.org · 2 days ago

    For one, I don’t use software that updates constantly. If I had to log in to a container more than once a year to fix something, I’d figure out something else. My NAS is just hard drives on a Debian machine.

    Everything I run is either Debian or some form of BSD.

    • sugar_in_your_tea@sh.itjust.works · 1 day ago

      Same, but openSUSE. Tumbleweed on my desktop and laptop, Leap on my servers.

      And yeah, if I need to babysit something, I’ll use an alternative. I’ll upgrade when I’m ready to, which is usually over holidays when I’m bored and looking for a project.

  • TedZanzibar@feddit.uk · 1 day ago

    Yeah, everything that’s already been said, except that I specifically chose an off-the-shelf Synology NAS with Docker support to run my core setup for this exact reason. It needs a reboot maybe once or twice a year for critical updates but is otherwise rock solid.

    I have since added a small N100 box for things that need a little extra grunt (Plex mainly) but I run Ubuntu Server LTS with Docker on that and do maintenance on it about as often as I reboot the NAS.