this post was submitted on 22 Mar 2024
5 points (100.0% liked)

Selfhosted


Hi, I've been thinking for a few days about whether I should learn Docker or Podman. I know that Podman is more FOSS and I like it more in theory, but maybe it's better to start with Docker, for which there are a lot more tutorials. On the other hand, maybe it's better to learn Podman straight away, since I don't know either of the two yet and wouldn't have to change habits later. What do you think? For context: I know how containers work in theory, and I think I know Linux reasonably well, but I've never actually used Docker or Podman. In other words: if I want to eventually end up with Podman, is it easier to start with Docker and then learn Podman, or to start with Podman right away? Thanks in advance.

top 37 comments
[–] RalphFurley@lemmy.world 1 points 7 months ago

I just downloaded Podman Desktop and am playing with it. Almost all videos and tutorials out there are for Docker, so I'm going to watch those but actually use Podman to learn.

[–] redcalcium@lemmy.institute 1 points 7 months ago

It's easier to start with Docker simply because of the sheer amount of learning resources available on the internet. If you're having issues, you can usually find a solution quickly with a search engine.

That being said, there aren't many differences in how you use them these days. You can even run Docker Compose files against Podman.
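As a rough sketch of what that looks like in practice (assuming a recent Podman with Compose support and the user socket available), the same muscle memory carries over:

```shell
# Docker and Podman share the same CLI surface for day-to-day use:
docker run -d --name web -p 8080:80 nginx
podman run -d --name web -p 8080:80 nginx

# Compose files work too: either via Podman's built-in wrapper...
podman compose up -d

# ...or by pointing the regular docker-compose binary at Podman's
# Docker-compatible API socket (rootless socket path shown):
systemctl --user start podman.socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker-compose up -d
```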

[–] prettybunnys@sh.itjust.works 1 points 7 months ago

Docker is more ubiquitous, Podman has use cases that diverge from Docker.

Discover the use case and decide from there.

That said, Docker is a good starting point: their documentation is pretty great, and once you know Docker you'll better appreciate why Podman is different.

[–] lemmyvore@feddit.nl 1 points 7 months ago

Learn Docker first; it will be faster and easier. It will give you an intro to containers, and you'll get some practical use out of it for your self-hosting needs.

If you're still curious later you can go deeper into Podman and other container technology.

[–] BrianTheeBiscuiteer@lemmy.world 1 points 7 months ago (1 children)

They're very similar, so you pretty much can't go wrong. Podman, I believe, is more secure by default (or aims to be), so you might run into more roadblocks with its use.

[–] Kalcifer@sh.itjust.works 1 points 7 months ago

so might run into more roadblocks with its use.

This has been my experience with Podman. That's not to say these roadblocks are without reason or merit, but there is always a trade-off in convenience when optimizing for security.

[–] PlusMinus@lemmy.world 1 points 7 months ago (2 children)

In case you haven't started yet: learn Docker, but use Podman.

[–] crony@lemmy.cronyakatsuki.xyz 1 points 7 months ago* (last edited 7 months ago)

Docker and Podman generally work the same, and the commands are the same.

The biggest difference I've run into, now that I'm migrating from Docker to Podman, is fighting with volume bind permissions for databases and the like.

I've finished migrating 3 containers; 50+ to go.
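For anyone hitting the same volume-permission fights, the usual fixes are SELinux relabeling and chowning into the rootless user namespace. A sketch (container names, paths, and the UID are illustrative; check what your image actually runs as):

```shell
# On SELinux systems, add :Z (private) or :z (shared) so Podman relabels the bind mount:
podman run -d --name db -v ./pgdata:/var/lib/postgresql/data:Z postgres:16

# Rootless Podman maps container UIDs into a user namespace, so a database
# running as UID 999 inside the container writes as a subordinate UID on the
# host. Chown the host directory from inside that namespace:
podman unshare chown -R 999:999 ./pgdata

# Or let Podman chown the volume automatically with the U mount option:
podman run -d --name db -v ./pgdata:/var/lib/postgresql/data:U,Z postgres:16
```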

[–] 1371113@lemmy.world 1 points 7 months ago

OP, listen to this person. Docker will earn you cash. Podman is nicer to work with for your own shit.

[–] Decronym@lemmy.decronym.xyz 1 points 7 months ago* (last edited 7 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
DNS Domain Name Service/System
Git Popular version control system, primarily for code
HTTP Hypertext Transfer Protocol, the Web
LXC Linux Containers
Plex Brand of media server package
SSH Secure Shell for remote terminal access
nginx Popular HTTP server

[Thread #623 for this sub, first seen 23rd Mar 2024, 07:55] [FAQ] [Full list] [Contact] [Source code]

[–] Nibodhika@lemmy.world 0 points 7 months ago (1 children)

Docker: there are more resources for it, and once you know it, Podman should be an easy migration if you want it. Also, I'm not sure about your claim that Podman is more FOSS than Docker. It's "better" because it doesn't run as root, but other than that I don't know of any advantages that aren't a derivation of "it runs as a regular user".

[–] Kalcifer@sh.itjust.works 1 points 7 months ago

Podman is [...] “better” because it doesn’t run as root, but other than that I don’t know of any advantages to it that are not a derivation of “it runs as a regular user”.

Podman can run in rootless mode (with some caveats), but it is still able to run as root — it doesn't only have the capability to run as a "regular user".
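To illustrate the distinction (commands are a sketch, assuming a stock Podman install):

```shell
# Rootless: the container runs inside the invoking user's namespace.
# "root" inside the container is mapped to your unprivileged UID outside.
podman run --rm alpine id

# Rootful: same CLI run as root, behaving much like the Docker daemon's containers.
sudo podman run --rm alpine id

# One of the rootless caveats: ports below 1024 can't be bound by default.
# Loosening that requires a sysctl on the host:
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
```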

[–] lal309@lemmy.world 0 points 7 months ago (1 children)

Honestly, if you have never used containers before, I would suggest starting with Docker, as it has more readily accessible beginner walkthroughs and tutorials. From there, you will have a good idea as to whether switching to Podman is the right move for you or not.

Personally, I started with docker and haven’t moved from there since I don’t see a need (yet). I have dozens of services running on docker. I don’t know how heavy of a lift it would be to learn podman but like I said, I don’t feel the need to do so.

Maybe try out both and see which one you like more?

Just to offer the other perspective: I started with Podman years ago. I knew very little about containers, and I would say it made the learning curve a lot steeper. Most guides and READMEs use Docker, and when things didn't work I had to figure out whether it was networking, SELinux, rootless mode, not having the Docker daemon, etc., without fully understanding what those things were, because I didn't know Docker. But when I started running stuff on Kubernetes, it was really easy: pods in Podman are isomorphic to Kubernetes pods. I think the pain was worth it, but it was definitely not easy at the time. Documentation, guides, and networking have improved since then, so it may not be as big of a deal now.
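That pod correspondence is directly usable: Podman can both emit and replay Kubernetes YAML. A sketch (pod and container names are illustrative):

```shell
# Group containers into a pod, sharing network namespaces like a k8s pod:
podman pod create --name myapp -p 8080:80
podman run -d --pod myapp --name web nginx
podman run -d --pod myapp --name cache redis

# Export the pod as Kubernetes YAML...
podman generate kube myapp > myapp.yaml

# ...then replay it locally with Podman, or apply it to a real cluster:
podman play kube myapp.yaml
kubectl apply -f myapp.yaml
```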

[–] SpaceNoodle@lemmy.world -1 points 7 months ago (1 children)
[–] stepanzak@iusearchlinux.fyi 0 points 7 months ago (1 children)

Do you selfhost stuff on bare metal? I feel like most projects provide containers as their officially supported packages.

[–] SpaceNoodle@lemmy.world 0 points 7 months ago (1 children)

No, I use an operating system.

[–] victorz@lemmy.world 0 points 7 months ago (2 children)

Not sure why all the down votes without any explanation.

I too don't use Docker for my services. I run Plex on my Arch install via the provided AUR package. 🤷‍♂️ Nobody told me I needed to do otherwise, with Docker or anything else. I'm not sure why that would be better in any way; it could hardly be more performant, and it's as simple as enabling the service and forgetting about it.

[–] Nibodhika@lemmy.world 0 points 7 months ago (1 children)

Maybe they're taking issue with their answer of "using an OS", which implies other people aren't? I don't know.

But as for you: if you're running just one or two services on a machine you also use for other stuff, using packages and systemd services is perfectly fine. If you have dedicated hardware for it (or plan on having it), it starts to make sense to look at ways of making things easier for yourself in the long run. Docker solves lots of issues no one talks about anymore (because no one is facing them anymore), e.g.:

  • Different services requiring different versions of the same library/database/etc.
  • Moving your service from one computer to another.
  • Services requiring specific steps for updates (this is not entirely gone, but it's much better, and it prevents you from breaking your services with a routine operation like updating your system).
  • Pinning versions of services until you decide to update, without sacrificing system updates for it (I know you can pin a package version, but if you don't upgrade it, it will break when you upgrade its dependencies).
  • Easily mapping ports or blocking access in a generic way: no need to discover how each service's config file handles it, you can just do it at the container level. E.g. databases that can't be accessed from the network or even from within the host machine in the traditional way, so a user who gains access to your machine under an account that isn't allowed to use Docker can't reach them.
  • Isolation between services.
  • Isolation from the host machine.
  • Reproducibility of services (i.e. one small Docker Compose file guarantees a reproducible set of services).
  • Ensuring that no service runs as root (even services that expect to run as root).
  • Spinning up services in minutes to test things, and cleaning them out thoroughly in seconds.
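Several of the points above (pinned versions, a database unreachable from the network, reproducibility in one small file) can be seen in a minimal compose file. Service names and versions here are illustrative, not a recommendation:

```yaml
services:
  app:
    image: ghcr.io/example/app:1.4.2   # pinned until *you* decide to update
    ports:
      - "127.0.0.1:8080:8080"          # mapped at the container level, localhost only
    depends_on:
      - db
  db:
    image: postgres:16                 # another service could run postgres:12 beside it
    # no "ports:" entry: reachable by "app" over the compose network,
    # but not from the host's network interfaces
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```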

There are probably many more reasons to use Docker. Plus, once you've learned it, it's very easy for small self-hosted stuff, so there's really no reason not to use it. Every time I see someone saying they don't use Docker and don't understand why people use it, I'm a bit baffled. It's like someone claiming they don't understand why people use knives to cut bread when the two-handed axe they use for chopping wood works (yes, it does work, but it's obviously not the best tool for the job).

[–] TCB13@lemmy.world 0 points 7 months ago (1 children)

Are you aware that all those isolation, networking, firewall, etc. issues can be solved by simply learning how to write proper systemd units for your services? Start by reading this: https://www.redhat.com/sysadmin/mastering-systemd
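For reference, that kind of hardening can be layered onto an existing service with a drop-in override. The directives below are standard systemd options; which ones a given service tolerates varies, so treat this as a starting sketch to audit, not a universal recipe:

```ini
# systemctl edit myservice
# -> creates /etc/systemd/system/myservice.service.d/override.conf
[Service]
DynamicUser=yes            # run under a transient, unprivileged user
ProtectSystem=strict       # read-only /usr, /boot, /etc
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
NoNewPrivileges=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
CapabilityBoundingSet=     # empty value drops all capabilities

# Audit the result with:
#   systemd-analyze security myservice
```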

[–] Nibodhika@lemmy.world 0 points 7 months ago (1 children)

Yes, I'm aware of that, having written several systemd units for my own services in the past. But you're not likely to get any of that by default when you just install from the package manager, which is the discussion here. Most people will just use the default systemd unit provided, and in the vast majority of cases those don't provide the same level of isolation the default Docker Compose file does.

We're talking about ease of setting things up. Anything you can do with Docker you can do without it; it's just a matter of how easy it is to get good defaults. A similar argument to yours would be that you can also install multiple versions of databases directly on your OS.

For example, I'm 99% sure the person I replied to has this file for their service:

[Unit]
Description=Plex Media Server
After=network.target network-online.target

[Service]
# In this file, set LANG and LC_ALL to en_US.UTF-8 on non-English systems to avoid mystery crashes.
EnvironmentFile=/etc/conf.d/plexmediaserver
ExecStart=/usr/lib/plexmediaserver/Plex\x20Media\x20Server
SyslogIdentifier=plexmediaserver
Type=simple
User=plex
Group=plex
Restart=on-failure
RestartSec=5
StartLimitInterval=60s
StartLimitBurst=3

[Install]
WantedBy=multi-user.target

That's some good user isolation, but almost nothing else, and I doubt that someone who argued that installing from the package manager is easier will run systemctl edit on what they just installed to add extra security features.

[–] TCB13@lemmy.world 0 points 7 months ago* (last edited 7 months ago) (1 children)

But you’re not likely to get any of that by default when you just install from the package manager as it’s the discussion here,

This is changing... Fedora is planning to enable the various systemd services hardening flags by default and so is Debian.

We’re talking about ease of setting things up, anything you can do in docker you can do without

Yes, but at what cost? At the cost of being overly dependent on some cloud service or proprietary solution like Docker Hub or Kubernetes? Remember that the alternative is packages from your Linux repository, which can easily be mirrored, archived offline, and whatnot.

[–] Nibodhika@lemmy.world 0 points 7 months ago (1 children)

You're not forced to use Docker Hub or Kubernetes; in fact I use neither. Also, if a team chooses to host their images on Docker Hub, that's their choice. It's like saying Git is bad because Microsoft owns GitHub, or that installing software X from the repos is better than compiling because you'd need to use GitHub to get the code.

Docker images can also easily be mirrored, archived offline, etc., and they will keep working long after the packages you archived stop, since those break the moment the base version of some library gets updated.

[–] TCB13@lemmy.world 0 points 7 months ago (1 children)

Yet people choose to use those proprietary solutions and platforms because it's easier. This is just like Chrome: there are other browsers, yet people go for Chrome.

It's significantly harder to archive and keep functional offline setups with Docker than it is with an APT repository. It's more of a hack than something it was designed for.

[–] Nibodhika@lemmy.world 0 points 7 months ago (1 children)

It's definitely much easier to do that with Docker than with apt packages, and Docker was designed for that. Just do a save/load (https://docs.docker.com/reference/cli/docker/image/save/), and like I mentioned before, this is much more stable than saving some .deb files, which will break the moment one of their dependencies gets updated.
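Concretely, the offline workflow looks like this (image name and tag are illustrative):

```shell
# Archive an image, with all its layers and pinned library versions, to a tarball:
docker image save -o nextcloud_29.tar nextcloud:29

# Move the file anywhere (USB stick, offline mirror), then restore it on
# another machine with no registry access:
docker image load -i nextcloud_29.tar

# Podman supports the same save/load pair with identical syntax:
podman image save -o nextcloud_29.tar nextcloud:29
podman image load -i nextcloud_29.tar
```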

Most people will use whatever compose file a project shows as the default; if the project hosts its images on Docker Hub, that's their choice. Plus, I don't understand what the problem is: GitHub is also proprietary, and no one cares that a project is hosted there.

[–] TCB13@lemmy.world 0 points 7 months ago (1 children)

It’s definitely much easier to do that on docker than with apt packages,

What a joke.

Most people will use whatever docker compose file a project shows as default, if the project hosts the images on dockerhub that’s their choice

Yes, and they push the market in a direction that affects everyone.

GitHub is also proprietary and no one cares that a project is hosted there.

People do care, and that's why there are public alternatives such as Codeberg and the underlying project, Gitea.

[–] Nibodhika@lemmy.world 1 points 7 months ago

Got it, no one should use software hosted on GitHub. You're either a teenager who discovered Linux a couple of years ago or a FOSS fundamentalist; either way, I've had a personal policy of not wasting time with either for over 20 years.

[–] SpaceNoodle@lemmy.world 0 points 7 months ago* (last edited 7 months ago) (1 children)

People love to hate on people who don't care for containers.

Also, I'm guessing that nobody here actually knows what it means to run code on bare metal.

What you're doing is fine. No need to make life harder for yourself.

[–] victorz@lemmy.world 0 points 7 months ago (1 children)

People love to hate on people who don't care for containers.

Maybe so. 😕

what it means to run code on bare metal

I'm guessing it means something slightly different than what most people think, namely to just run it in the OS. Would you explain to me what it really means?

[–] SpaceNoodle@lemmy.world 1 points 7 months ago

Bare metal would mean without an OS to manage peripherals, resources, even other tasks - like you might find on a resource-constrained embedded system.

[–] Shimitar@feddit.it -1 points 7 months ago (1 children)

Here's my experience.

When I started the self-hosting trip, I was against containers and tried to avoid them at all costs. Then I learned about containers, and now I'm still against them, but less vehemently so. I have used them and still use them.

Containers are good for the self hoster because they deliver fast deploy and easy testing of lots of services quickly. They are good for developers because they can provide one common installation approach that reduces greatly user issues and support requests.

But containers have downsides as well. First of all, they make the user dumber: instead of learning something new, you blindly "compose pull & up" your way through. Easy, but it's a dumbing-down, and that's not a good thing. Second, there is a dangerous trend where projects only release containers, and that's bad for freedom of choice (bare-metal installs, as complex as they might be, need to always remain possible), and while I am aware that you can download an image and extract the files inside, that's more of a hack than a solution. Third, with containers you are forced to use whatever deployment the devs have chosen for you. Maybe I don't want 10 Postgres instances, one for each service, or maybe I already have my own nginx reverse proxy. I have seen projects release different compose files for different scenarios, but at that point I would prefer to deploy on bare metal.

That said, containers are not avoidable today, so study and embrace them; you will not be disappointed, as it's a cool piece of tech. But please steer clear of Docker and go with Podman instead. Podman doesn't rely on a potentially insecure socket and does not require an always-running daemon. Podman also doesn't force you to run services as root by default, which you should never do. Also, networking feels clearer on Podman, and Podman feels more modern by using nftables instead of iptables. Yes, most of this can be fixed on Docker, but since Podman is a drop-in replacement, why bother? Also, Podman is truly open source, while Docker, shockingly, is not.
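On the "no always-running daemon" point: rootless Podman containers can be handed straight to systemd, so nothing central has to stay up. A sketch (container and unit names are illustrative):

```shell
# Generate a user-level systemd unit for an existing container...
podman generate systemd --new --files --name mycontainer
mv container-mycontainer.service ~/.config/systemd/user/

# ...and let systemd itself supervise it, no daemon involved:
systemctl --user daemon-reload
systemctl --user enable --now container-mycontainer.service

# Let the user's services start at boot without an interactive login:
loginctl enable-linger "$USER"
```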

Here is my wiki page on the subject: https://wiki.gardiol.org/doku.php?id=gentoo:containers feel free to read it.

One last thought: updating containers should not be taken lightly. It's so easy and fast that you might be tempted to set up cron jobs or install Watchtower, but sooner or later you will end up with a broken service and lost data. So back up, always back up, and update with rationale.

TL;DR: containers are unavoidable today and are a cool piece of tech worth investigating. Don't use them blindly, as there are security issues involved, and I hope the trend of making containers the only option doesn't take hold, because containers also make self-hosters dumber, and that's not good.

[–] mea_rah@lemmy.world 0 points 7 months ago (1 children)

I don't agree with the premise of your comment about containers. I think most of the downsides you listed are misplaced.

First of all they make the user dumber. Instead of learning something new, you blindly "compose pull & up" your way. Easy, but it's dumbifier and that's not a good thing.

I'd argue that actually using containers properly requires very solid Linux skills. If someone indeed blindly "compose pull & up"s their stuff, this is no different from the blind curl | sudo bash, which is still very common. People are going to muddle through the installation copy-pasting stuff no matter what. I don't see why containers and compose files would be any different from a pipe to bash or a random Reddit comment with "step by step instructions". Look at any forum where end users aren't technically strong and you'll see the same (emulation forums, Raspberry Pi projects, home automation, ...): random shell scripts, rm -rf this; chmod 777 that.

Containers are just another piece of software that someone can and will run blindly. But I don't see why you'd single them out here.

Second, there is a dangerous trend where projects only release containers, and that's bad for freedom of choice

As a developer I can't agree here. The Docker images (not "containers", to be precise) are not there to replace deb packages. They are there because it's easy to provide an image. It's much harder to release a set of debs, rpms, and whatnot for distributions the developer isn't even using. The other options wouldn't even exist in the first place, because there are only so many hours in a day and my open source work is not paying my bills most of the time (patches and continued maintenance are of course welcome). So the alternative would be just the source code, which you still get; no one is limiting your options there. If anything, the Dockerfile at least shows exactly how you can build the software yourself, even without using Docker. It's just a bash script with extra isolation.

I am aware that you can download an image and extract the files inside, that's more an hack than a solution.

Yeah, please don't do that; it's probably not a good idea. Just build the binary or whatever you're trying to use yourself. The binaries in an image often depend on libraries inside said image, which can differ from your system's.

Third, with containers you are forced to use whatever deployment the devs have chosen for you. Maybe I don't want 10 postgres instances one for each service, or maybe I already have my nginx reverse proxy or so.

It might be easier (effort-wise), but you're certainly not forced. At the very least you can clone the repo and edit the Dockerfile to your liking. With the compose file it's the same story: just edit the thing, or don't use it at all. I frequently use compose files just for reference/documentation and run software as a set of systemd units in Nix. You do you. You don't have to follow a path that someone paved if you don't like the destination. Remember that it's often someone's free time that paid for this path; they are not obliged to provide a perfect solution for you, and they are not taking anything away from you by providing a solution that someone else can use.

[–] Shimitar@feddit.it 1 points 7 months ago

I fully agree with you that devs should not release debs, rpms, etc.; it's the distro's responsibility to create and manage those from the binaries that the devs should release. No dev should have to create those distro-based formats; it's evil and useless.

Let me be more clear: devs are not required to release binaries at all. But they should, if they want their work to be widely used. And in that case, providing a binary release alongside images solves all freedom-of-choice issues, in my opinion. Here you've exposed my lack of preparedness, as I hadn't considered Dockerfiles to be actual build instructions; I will in the future.

I also fully agree with you that curl-pipe-bash should be banned as an awful practice, and that it is much worse than containers in general. But posting instructions on forums and websites is not per se dangerous or bad practice. Following them blindly is, but there are still people not wearing seatbelts in cars or helmets on bikes, so...

I was not singling containers out; I was replying to a post about containers. If you read my wiki, every time a curl-pipe-bash approach is proposed, I break it down and advise against doing it that way.

chmod 777 should be banned in any case, but that stems from container usage (due to wrongly built images) more than anything else, so I guess you are biting your own cookie here.

Having Dockerfiles and compose files is perfectly acceptable. What is not acceptable is having only those and no binary releases. Usually sources are available (in FOSS apps at least), but that can be useless if there are no build instructions provided or the app uses a less common build stack.

With Immich, which is a perfect example of an amazing piece of software, fast-growing and very polished, I did try to build from sources, but I couldn't manage the ML part properly. This is indeed due to my lack of experience with the peculiar stack they are using, but some build instructions would have been appreciated greatly (now I realize I should have started from the Dockerfiles). I gave up and pulled the images. No harm done, but little extra fun for me. And while I do understand the devs' position, they keep talking about making a living out of it, and that's a totally different point for a different thread. I would suggest to them that public relations and user support matter more for making a living out of it than releasing an amazing product, but that's just my real-world experience as a product manager.

In a world where containers are the only proposed solution, I believe something will be taken from us all. Somebody else explained that concept better than me in this thread. That's all.

[–] yournamehere@lemm.ee -3 points 7 months ago (2 children)

If you haven't started yet: neither.

Use NixOS.

[–] mea_rah@lemmy.world 1 points 7 months ago

I'm a huge fan of Nix, but for someone wondering whether they should "learn Docker", Nix is absolutely brutal.

Also IMO while there's some overlap, one is not a complete replacement for the other. I use both in combination frequently.

[–] pete_the_cat@lemmy.world 1 points 7 months ago

I had an interview the other day and was surprised to hear that The University of Miami is actually using Nix for about 16 of their machines. I haven't used Nix yet, but thanks to everyone talking about it I could tell them the benefits of using it haha