Container (in Kubernetes) fails due to "The username is already being used" when pointed at existing install

(Pardon the lack of the template, I couldn’t get what I wanted to say out when using it.)

I’m trying to learn Kubernetes. I’ve successfully deployed a cluster, so now I’m trying to deploy OwnCloud as a learning experiment. Specifically, I’m deploying 3 replicas with Longhorn-backed storage via a StatefulSet. MySQL 5.7 is running on a separate server.
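For reference, the shape of what I’m deploying looks roughly like the following. This is a minimal sketch rather than my exact manifest; the names, port, mount path, storage class, and volume size are illustrative assumptions:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: owncloud
spec:
  serviceName: owncloud              # assumes a matching headless Service exists
  replicas: 3
  selector:
    matchLabels:
      app: owncloud
  template:
    metadata:
      labels:
        app: owncloud
    spec:
      containers:
        - name: owncloud
          image: owncloud/server:10.8.0
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: data
              mountPath: /mnt/data   # data directory of the official image
  volumeClaimTemplates:              # one Longhorn-backed PVC per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: longhorn
        resources:
          requests:
            storage: 10Gi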

The issue I’m running into is that one of the OwnCloud pods refuses to start. Its owncloud container logs look like:

Creating volume folders...
Creating hook folders...
Waiting for MySQL...
services are ready!
Waiting for Redis...
services are ready!
Removing custom folder...
Linking custom folder...
Removing config folder...
Linking config folder...
Writing config file...
Fixing base perms...
Fixing data perms...
Fixing hook perms...
Installing server database...
The username is already being used

For comparison, here are the initial logs from the owncloud container of one of the working pods:

Creating volume folders...
Creating hook folders...
Waiting for MySQL...
services are ready!
Waiting for Redis...
services are ready!
Removing custom folder...
Linking custom folder...
Removing config folder...
Linking config folder...
Writing config file...
Fixing base perms...
Fixing data perms...
Fixing hook perms...
Upgrading server database...
ownCloud is already latest version
ownCloud is already latest version
Writing objectstore config...
Writing php config...
Updating htaccess config...
.htaccess has been updated
Writing apache config...
Enabling cron background...
Set mode for background jobs to 'cron'
Writing crontab file...
Touching cron configs...
Starting cron daemon...
Starting apache daemon...
....
< a whole bunch of apache logs >

When I initially set up the instance, I used the OWNCLOUD_ADMIN_USERNAME and OWNCLOUD_ADMIN_PASSWORD env variables. When I started seeing the “The username is already being used” message, I removed those variables from my configuration.

I confirmed that neither variable is present on the failing pod via kubectl describe.

I’m also seeing odd behavior with sessions. When I log in, OwnCloud will refresh a few times and then log me out. Sometimes I get a browser message about something being wrong with my cookies.

I could see the session behavior being due to me not yet having figured out how to tell K8s to always send the same user to the same pod.
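From what I’ve read, ingress-nginx can do cookie-based session affinity with annotations along these lines. I haven’t verified that this fixes the logouts, and the host, Service name, and cookie name below are just placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: owncloud
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "owncloud-affinity"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  ingressClassName: nginx
  rules:
    - host: owncloud.example.com       # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: owncloud         # placeholder Service name
                port:
                  number: 8080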

But that failing container doesn’t make any sense to me. Shouldn’t it detect that OwnCloud is installed and just move on?

Let me know if there are any configuration details or logs that would help.

Thanks in advance!

Info

OwnCloud official docker image 10.8.0.
Kubernetes 1.22
Ingress is ingress-nginx
Using MetalLB
CNI is Weave
Host OS is Ubuntu 20.04

To bring things up to date: I rebuilt my test cluster with Cilium as the CNI and updated to Kubernetes 1.23.

A fresh install of OwnCloud, this time without setting the admin username and password env variables, runs into exactly the same issue as in my first post.

Did you also reset the database in your MySQL installation? Something like DROP DATABASE owncloud; CREATE DATABASE owncloud;?

Yep. Dropped the db and recreated it. Then started the containers in Kube after that.


This is still an issue for me. Anyone have any suggestions?

I assume the replicas share ownCloud’s data directories… Anyway, it looks like a possible race condition, so maybe you could try running the installation process with just one ownCloud instance and set up the 3 replicas afterwards.

But that failing container doesn’t make any sense to me. Shouldn’t it detect that OwnCloud is installed and just move on?

I think it does that, but with 3 replicas running at the same time and without an “installing” state, all the replicas assume they need to install ownCloud, so I guess 2 of them fail. The first replica doesn’t have a chance to install everything and mark ownCloud as installed before the next replicas check for the installation.

Nope. I’m using Longhorn, so each instance gets its own volume.

I’m using a Kubernetes StatefulSet to deploy. From my limited experience with that construct, it waits until the first instance of ownCloud is up and running before starting the next, so ownCloud should be fully installed before the next container is started. Plus, in that case, wouldn’t the error go away once I successfully installed and restarted the failing containers?
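For what it’s worth, I haven’t overridden the pod management policy, so the StatefulSet should be using the default OrderedReady behavior, which (as I understand it) creates the pods one at a time and waits for each to be Running and Ready before starting the next:

spec:
  podManagementPolicy: OrderedReady   # the default; shown here explicitly
  replicas: 3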

I guess I should dig into the image and see how ownCloud actually detects things…

Ok. So, after digging through the docker images and just running an instance to look at all the different scripts, I think I have tracked down part of how OwnCloud detects if it’s installed or not.

This function, https://github.com/owncloud-docker/base/blob/06d29c8ea39d09988809883f34464e8d312522e4/v20.04/overlay/usr/bin/owncloud#L52, is called when the /etc/owncloud.d/30-install.sh file is sourced, which, as far as I can tell, happens every time the image starts up.

That function calls occ -V.

I haven’t tracked down exactly how occ -V checks if things are installed, but I’d be willing to bet that it has something to do with this function https://github.com/owncloud/core/blob/a8b0dd3d164da66e060290e23b278817d444d15d/lib/base.php#L273 in base.php.

If that’s the function responsible, then ownCloud is only checking if the config value for installed is set.

In that case, then, you’d never be able to use more than one instance of ownCloud unless they were sharing their config file.

That would be the case if Kubernetes were mounting the replicated data volume before ownCloud checked whether things were installed. But Kube isn’t doing that, and I’m not sure why… I’d think you would mount the volumes as part of the initialization process, right?

Ok, at this point I have something to look into. If anyone has any insights, feel free to share!

This looks bad. I’m quite convinced the data directory, and especially the config directory, needs to be shared among the replicas.

The “installed” state and many other configuration parameters, such as the DB location, must be shared; otherwise there is a risk that the replicas end up with different configurations, which could cause serious problems.
In particular, for the “installed” state, replica 1 won’t find anything, so it will try to install ownCloud; replica 2 won’t find anything either (its container is isolated), so it will also try to install ownCloud, and the same goes for replica 3.
I don’t know how Kubernetes handles these things, but I’m pretty sure 2 of the replicas are failing because they’re trying to install ownCloud when it’s already installed. They’re trying to prepare the DB when it has already been prepared by replica 1.

Assuming both the config and data directories are shared, and if the startup is sequential, replica 1 will install ownCloud, and replicas 2 and 3 will know that ownCloud has been installed, so they’ll skip the installation part.

Sorry, I totally forgot to mention that Longhorn replicates data between volumes. So the second and third OwnCloud instances should have exactly the same data as the first.

Turns out the issue was that I was using Longhorn wrong. A single PersistentVolumeClaim plus a Deployment seems to have resolved the issue. 🙂

I had thought I needed multiple volume claims, which is why I used a StatefulSet with a volume claim template at first. But that doesn’t work: Longhorn replicates data within a single volume, not across the multiple volumes I had with my StatefulSet.

If you use a single PVC and a Deployment, Longhorn makes sure to attach things correctly. At least, I think so. I’m still figuring all this out, so don’t take my word for it.
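In case it helps anyone else, the working setup is roughly the following. This is a sketch, not my exact manifests; the names, size, and replica count are illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: owncloud-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn             # Longhorn replicates this one volume
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: owncloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: owncloud
  template:
    metadata:
      labels:
        app: owncloud
    spec:
      containers:
        - name: owncloud
          image: owncloud/server:10.8.0
          volumeMounts:
            - name: data
              mountPath: /mnt/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: owncloud-data     # every pod mounts this single claim

The idea is that every pod mounts the same claim, so they all see the same data and config directory.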

