If I read this correctly, Immich is set up entirely through Ansible, with no docker compose. That's fine; however, if Immich drastically changes their setup topology, it'll be more work for you to implement those changes. For services that use docker compose, you could use Ansible to deploy a compose file into a directory, say /opt/immich-docker, along with its requisite .env and other files, then set it up to run via systemd. When you need to update it, it's almost copy-paste from the upstream compose file into your Ansible repo.
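A minimal sketch of that approach, assuming a role that ships the compose file and a hypothetical immich-docker.service unit template next to it (the paths, file names, and handler are illustrative, not from the original post):

```yaml
- name: Create the Immich compose directory
  ansible.builtin.file:
    path: /opt/immich-docker
    state: directory
    mode: "0755"

- name: Deploy the compose file and .env kept in the Ansible repo
  ansible.builtin.copy:
    src: "{{ item }}"
    dest: "/opt/immich-docker/{{ item }}"
    mode: "0640"
  loop:
    - docker-compose.yml
    - .env
  notify: Restart immich  # handler that runs `docker compose up -d`, not shown

- name: Install a systemd unit that wraps docker compose up/down
  ansible.builtin.template:
    src: immich-docker.service.j2
    dest: /etc/systemd/system/immich-docker.service
    mode: "0644"
  notify: Restart immich

- name: Enable the unit
  ansible.builtin.systemd:
    name: immich-docker.service
    enabled: true
    daemon_reload: true
```

Updating then mostly comes down to replacing docker-compose.yml with the new upstream version and rerunning the play.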
Heck, you could do a pre-stage play where you delegate an ansible.builtin.get_url task to localhost to download the compose file before doing the rest.
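Roughly something like this, assuming the play fetches the upstream file onto the control node first (the URL is illustrative; point it at a pinned release or tag rather than a branch):

```yaml
pre_tasks:
  - name: Fetch the upstream compose file onto the control node
    ansible.builtin.get_url:
      # Illustrative URL; check upstream for the real one and pin a version
      url: https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
      dest: "{{ playbook_dir }}/files/docker-compose.yml"
      mode: "0644"
    delegate_to: localhost
    run_once: true
    become: false
```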
I wouldn't do that, because I'd inevitably be picking up breaking changes without knowing about it and have to fix them after the fact. Unless you're pulling from a tag, I guess. Still, storing the file alongside the playbook feels more robust; you're less likely to get surprises. I'm also working under the assumption that you want to write idempotent code so nothing breaks when you rerun it, which lets you run it on a schedule to make sure your config doesn't drift too much.
Nice work!
Thank you!
I'm unsure but I see secret.yml in there. Is that sensitive? You might want to update that ASAP if it is.
Looks like it's encrypted with ansible-vault
If you look inside the file you will see that it's an encrypted file created via ansible-vault
Nice, well done. I wish I could find the same for Debian.
DebOps my dude.
Thanks!
Thx I had no idea!
It should be pretty easy to adapt it for Debian. As far as I can see, the only thing you'd need to change is swapping the dnf module for the apt module.
If you want to make your playbooks/roles more universal, there's a generic package module which will figure out what package manager to use based on the detected OS.
Or, if that doesn't fit your needs, you can add conditions to tasks (or blocks of tasks), like
when: ansible_os_family == "Debian"
and use that for tasks specific to a given Linux distro/family.
Ansible will detect a lot of info about each host and make it available as facts. See for example https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_vars_facts.html
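A small sketch of both approaches, using the gathered facts (the package names are illustrative):

```yaml
# Generic: Ansible picks apt/dnf/etc. based on the detected OS
- name: Install common packages
  ansible.builtin.package:
    name:
      - git
      - rsync
    state: present

# Specific: gate tasks on facts where the distros actually differ
- name: Install Docker on Debian-family hosts
  ansible.builtin.apt:
    name: docker.io
    state: present
  when: ansible_os_family == "Debian"

- name: Install Docker on RedHat-family hosts
  ansible.builtin.dnf:
    name: moby-engine
    state: present
  when: ansible_os_family == "RedHat"
```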
I know, but I also learned that it's generally better to use the specific module for the package manager (I just can't remember why off the top of my head), and I never intended this playbook to be generally usable.
I'm curious how using ansible to deploy docker containers is easier than just using docker compose?
Ansible makes sense for setting up the OS the way it needs to be (file systems, folder structure, etc.), but why make every container through Ansible instead of just making a docker compose and maybe having ansible deploy that?
Even easier would probably be to run something like Portainer and run the compose file through there.
just making a docker compose and maybe having ansible deploy that?
That's what I do. Why Ansible? Because it makes it easier to deploy the same service on different servers with slightly different configurations, for example when migrating from one server to another. It also gives me something I can easily back up (e.g. a git repo) that can rebuild my server(s) if needed.
That being said, I'm still setting everything up with Ansible.
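A quick sketch of that pattern, assuming the per-host values live in host_vars and the community.docker collection is installed (the file and path names are made up for illustration):

```yaml
- name: Render the compose file with per-host settings from host_vars
  ansible.builtin.template:
    src: docker-compose.yml.j2   # tracked in the git repo next to the play
    dest: /opt/immich-docker/docker-compose.yml
    mode: "0640"

- name: Bring the stack up (idempotent, so it can run on a schedule)
  community.docker.docker_compose_v2:
    project_src: /opt/immich-docker
    state: present
```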
I'm in a similar situation. I was using OpenMediaVault, but it has some networking bug that I just can't nail down or work around, so I have to manually fix the networking every time it breaks. Otherwise I barely used OMV's features and did most things through Docker. I'll be switching to DietPi and probably Ansible, unless I feel like learning Puppet.
I'm curious what issues you had with TrueNAS? I've been using it for about a year now and the only issue I have had has been with one of my pools deleting itself after a reboot, but that was user error because I put the wrong SED password in the settings.
The apps service just borked itself and I couldn't get it to start properly anymore. Also, deploying apps always took a ridiculously, annoyingly long time (about 15 minutes to deploy NPM).