So I am trying to install the Owncloud Docker image under rootless Podman, but I’ve hit a snag. The container never starts, and the logs suggest a user namespace conflict caused by running the container inside a pod with rootless Podman.
Creating hook folders...
Waiting for MySQL...
services are ready!
Waiting for Redis...
services are ready!
Writing config file...
Fixing base perms...
chown: changing ownership of '/var/www/owncloud': Operation not permitted
chown: changing ownership of '/var/www/owncloud/custom': Operation not permitted
chown: changing ownership of '/var/www/owncloud/config': Operation not permitted
I’ve spent hours troubleshooting this with little headway. It may be a problem with how I’m using Podman rather than an Owncloud issue, but I hoped some specific insight into the Owncloud container could help. If I override the entrypoint with /bin/bash instead of whatever the Dockerfile defines, I’m dropped into /var/www/owncloud as a user with UID 1022 and GID 1023. That is expected, because I set it:
/usr/bin/podman pod create --userns keep-id --publish 10.0.0.2:1080:8080/tcp --name owncloud-services
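For reference, this is roughly how I’ve been poking around inside the container (the image name is just what my setup pulls, so treat this as a sketch):

/usr/bin/podman run --rm -it --pod owncloud-services --entrypoint /bin/bash docker.io/owncloud/server
id     # uid=1022 gid=1023, same as my host user
pwd    # /var/www/owncloud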
The directories from the container logs above are owned by root inside the container, so it’s clear what’s happening: with the userns parameter set to keep-id, the process in the container runs as my unprivileged host user and can’t change ownership of content that the image ships as root-owned. However, I’m keeping the host user namespace deliberately, so that the container can access volumes mounted from the host without changing the permissions of those volumes on the host. Maybe there is a better way to accomplish that?
P.S. I have run this with Docker Compose before, but I’ve switched to Podman because I like its architecture and feature set better than Docker’s.
I don’t think you can do that with the --userns keep-id option. keep-id means that the UID of the user starting the container is mapped to the same ID inside the container, so you’re running as user 1022. The chown commands, however, require you to be running as root inside the container.
I think the default behaviour of Podman when running rootless is to map the user starting the container to root (0) inside the container, so running without any --userns option should do the trick. Note: depending on how you mounted/created the volumes, you might need to change their ownership on the host to the user starting the container (so that they are owned by root inside the container).
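A quick way to see the difference on your own machine (alpine used purely as a throwaway image; the exact IDs depend on your user):

podman run --rm --userns keep-id docker.io/library/alpine id
# uid=1022 gid=1023 — the same IDs as your host user
podman run --rm docker.io/library/alpine id
# uid=0(root) gid=0(root) — your host user is mapped to root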
Without setting the userns option to keep-id, the following happens:
From the host, all accesses from root in the container (UID 0) will appear to be from UID 100000. Inside the container, any file owned by user 100000 on the host will appear as owned by UID 0 (root). An interesting question is what happens to users not mapped into the container—what if I mount a volume owned by user 1001 on the host into a container using the user namespace I described? The kernel, in this case, will map any UID or GID not valid in the namespace to UID/GID 65534, a user and group called nobody, which is not a normal group.
Excerpt from www.redhat.com/sysadmin/debug-rootless-podman-mounted-volumes
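You can inspect the actual mapping on your host with podman unshare; the ranges below are only an example, since they depend on your /etc/subuid:

podman unshare cat /proc/self/uid_map
#          0       1022          1
#          1     100000      65536

The first column is the UID inside the namespace, the second is the host UID it maps to, and the third is the length of the mapped range.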
Volumes are owned by the user who is starting the container. Everything is done within that user’s home directory.
Yes, but the scripts that run at container startup try to change the ownership of the data directory to the www-data user inside the container. That only works if the chown command runs as root inside the container, and keep-id prevents that.
From the same blogpost you pasted:
by default, Podman maps the user running the container to root in the container—so now we’ll be accessing the volume as UID/GID 1000 on the host, despite being root in the container.
As I said: as far as I understand, the entrypoint script needs to run as root inside the container for the chown to work.
Yep, and that is the problem. The scripts change the ownership to a user that doesn’t exist on the host. I had mounted /var/www/owncloud from the host, though I don’t remember why. But even if I drop that mount, I still have to contend with the MariaDB and Redis containers and with the Owncloud data directory, which I also mount from the host. If the host user namespace isn’t passed into those containers, the owner and group of the mounted directories change on the host as well. And I can’t set a different userns parameter per container, because they’re all running in a single pod. This part is new to me: for the other containerized services on my homelab I’ve only ever run single containers, not a group of them in a pod. Podman pods are new to me; as far as I know, Docker doesn’t have a similar feature.
Essentially, what you’re telling me to do won’t work. I’ll have to find another way. I have an idea, but I’ll have to explore it further when I get back to my desk at home.
I do not believe that would work with the podman pod create command; that is the way to do it with standalone containers. Also consider that multiple containers are running in the same pod, so the other containers may not even have a user and group with a UID and GID of 1000. If they do, it may completely break them.
Oh, I see. You were able to set a different user namespace for each container running within a pod on Fedora 40. I’d wager that would work on my Fedora 40 Workstation too, but on the RHEL server the Podman version is too old to have some of the latest features. I already learned the hard way that my version of Podman Quadlet doesn’t support .pod files yet.
I am going to try to use a Podman Network instead of running the containers in a single pod. I think that might be the path forward. I’ll get back to you guys with an update later.
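Roughly what I have in mind — untested so far, and the image names and environment variables are just from memory, so take it as a sketch:

podman network create owncloud-net
podman run -d --name owncloud-db --network owncloud-net docker.io/library/mariadb
# (MariaDB and Redis env vars/volumes omitted for brevity)
podman run -d --name owncloud-redis --network owncloud-net docker.io/library/redis
podman run -d --name owncloud --network owncloud-net --userns keep-id --publish 10.0.0.2:1080:8080/tcp docker.io/owncloud/server

The containers should be able to reach each other by name over the network, and each one can get its own --userns setting, which a pod doesn’t allow on my Podman version.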
So here, fundamentally, lies my issue. It has always been this issue; I’ve only been trying to work around it. The container runs as root, so I need to map my UID and GID to the root user in the container. However, the container chowns everything to www-data:root, which means that when I mount ${HOME}/files on the host into the container’s /mnt/data, the owner is changed to a user that does not exist on the host. IMO, this is fundamentally flawed container design: it means I can’t even fix it via supplemental group permissions on the host. I would mind it far less if the chown were root:www-data within the container. Creating a dedicated host user to map onto the container’s www-data would not work either, since the unprivileged owncloud user on the host is not allowed to map host UIDs other than its own (and its subuid range) into a user namespace.
Why do you think the user needs to exist on the host? It doesn’t. You can just use the numeric IDs of the mapped users. So if www-data (33) in the container is mapped to the numeric ID 1000001 on the host, you can just use 1000001 if you need to chown anything on the host. The same is true for group mappings.
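If you don’t want to work out the host-side ID by hand, podman unshare lets you run the chown inside the same user namespace Podman uses for your containers, so the container-side IDs work directly (the path here is just an example):

podman unshare chown -R 33:33 ~/owncloud/data

Inside podman unshare, UID 33 resolves to the mapped host ID (something like 100032, depending on your /etc/subuid), and no root privileges are needed.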
I’m not sure I quite understand what you’re getting at. For one, the owncloud user on the host doesn’t have any sudo rights. And second, my goal is actually to bind mount a directory from a different location into this directory. I don’t want the container changing ownership on that directory, as I intend to give the owncloud user access to it and its files via supplementary group permissions.
Also, using --userns keep-id,uid=33,gid=33, similar to what @chrismaster suggested, might even map the local user starting the container to the right UID inside the container, giving it the correct permissions on the volumes. I haven’t tried that, though. But (I am by no means an expert on the Owncloud Docker images) there might be other things in the scripts called at startup that need root permissions, even if you get past the chown step.
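Something like this is what I mean — untested, the image name is assumed, and I believe the uid=/gid= options for keep-id need a fairly recent Podman (4.3 or newer):

podman run --rm --userns keep-id,uid=33,gid=33 --volume ${HOME}/files:/mnt/data docker.io/owncloud/server
# your host user is mapped to UID 33 (www-data) inside the container,
# so files owned by you on the host appear as owned by www-data inside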