(Pardon the lack of the template, I couldn’t get what I wanted to say out when using it.)
I’m trying to learn Kubernetes. I’ve successfully deployed a cluster, so now I’m trying to deploy OwnCloud as a learning experiment. Specifically, I’m deploying 3 replicas with Longhorn-backed storage via a StatefulSet. MySQL 5.7 is running on a separate server.
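For reference, my StatefulSet looks roughly like this (simplified sketch; the names, image tag, and storage size here are placeholders, not my exact manifest):

```yaml
# Simplified sketch of the StatefulSet -- names and sizes are approximate.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: owncloud
spec:
  serviceName: owncloud
  replicas: 3
  selector:
    matchLabels:
      app: owncloud
  template:
    metadata:
      labels:
        app: owncloud
    spec:
      containers:
        - name: owncloud
          image: owncloud/server:10.8.0
          volumeMounts:
            - name: data
              mountPath: /mnt/data
  # Each replica gets its own Longhorn-backed PVC from this template.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: longhorn
        resources:
          requests:
            storage: 10Gi
```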
The issue I’m running into is that one of the OwnCloud pods refuses to start. Its owncloud container logs look like:
Creating volume folders...
Creating hook folders...
Waiting for MySQL...
services are ready!
Waiting for Redis...
services are ready!
Removing custom folder...
Linking custom folder...
Removing config folder...
Linking config folder...
Writing config file...
Fixing base perms...
Fixing data perms...
Fixing hook perms...
Installing server database...
The username is already being used
For comparison, here are the initial logs from the owncloud container of one of the working pods:
Creating volume folders...
Creating hook folders...
Waiting for MySQL...
services are ready!
Waiting for Redis...
services are ready!
Removing custom folder...
Linking custom folder...
Removing config folder...
Linking config folder...
Writing config file...
Fixing base perms...
Fixing data perms...
Fixing hook perms...
Upgrading server database...
ownCloud is already latest version
ownCloud is already latest version
Writing objectstore config...
Writing php config...
Updating htaccess config...
.htaccess has been updated
Writing apache config...
Enabling cron background...
Set mode for background jobs to 'cron'
Writing crontab file...
Touching cron configs...
Starting cron daemon...
Starting apache daemon...
....
< a whole bunch of apache logs >
When I initially set up the instance, I used the OWNCLOUD_ADMIN_USERNAME and OWNCLOUD_ADMIN_PASSWORD env variables. When I started seeing the "The username is already being used" message, I removed those variables from my configuration. I confirmed via kubectl describe that neither variable is present on the failing pod.
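This is roughly how I checked (pod name is just an example from my cluster):

```shell
# Look for the admin env vars in the pod spec:
kubectl describe pod owncloud-2 | grep -i owncloud_admin

# And double-check inside the running container itself:
kubectl exec owncloud-2 -- env | grep -i OWNCLOUD_ADMIN
```

Neither command shows the variables on the failing pod.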
I’m also seeing odd behavior with sessions. When I log in, OwnCloud will refresh a few times and then log me out. Sometimes I get a browser message about something being wrong with my cookies.
I could see the session behavior being due to my not yet having figured out how to tell K8S to send the same user to the same pod every time.
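If I understand the ingress-nginx docs correctly, cookie-based session affinity would look something like this (untested on my end so far; annotation names are from the ingress-nginx documentation):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: owncloud
  annotations:
    # Pin each browser session to one backend pod via a routing cookie.
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "owncloud-route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
spec:
  # ... rules/backend omitted ...
```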
But that failing container doesn’t make any sense to me. Shouldn’t it detect that OwnCloud is already installed and just move on?
Let me know if there is any other configuration or logs that would help.
Thanks in advance!
Info
OwnCloud official Docker image 10.8.0.
Kubernetes 1.22
Ingress is ingress-nginx
Using MetalLB
CNI is Weave
Host OS is Ubuntu 20.04