Microstack dashboard stops working after a while

Hi,

I previously installed MicroStack on Ubuntu 20.10 with snapd from the edge channel and it worked well. But after a while, maybe a day later, I could no longer log in via the dashboard, and it reported the following message on the web page (http://10.20.20.1):

FileNotFoundError at /auth/login/

[Errno 2] No such file or directory

Request Method: 	GET
Request URL: 	http://10.20.20.1/auth/login/?next=/
Django Version: 	2.2.12
Exception Type: 	FileNotFoundError
Exception Value: 	[Errno 2] No such file or directory

Exception Location: 	/usr/lib/python3.8/posixpath.py in abspath, line 379
Python Executable: 	/snap/microstack/218/bin/uwsgi
Python Version: 	3.8.5
Python Path: 	

['.',
 '',
 '/var/snap/microstack/common/etc/horizon/uwsgi/snap',
 '/usr/lib/python3.8',
 '/usr/lib/python3/dist-packages',
 '/snap/microstack/218/usr/lib/python3.8',
 '/snap/microstack/218/lib/python3.8/site-packages',
 '/snap/microstack/218/usr/lib/python3/dist-packages',
 '/snap/microstack/218/usr/lib/python38.zip',
 '/snap/microstack/218/usr/lib/python3.8/lib-dynload',
 '/snap/microstack/218/lib/python3.8/site-packages/openstack_dashboard']

Server time: 	Tue, 10 Nov 2020 12:04:49 +0000

Basically I haven’t created any instances yet; I just stopped at step 4. Is there any other information I need to provide?

$ snap services
Service                                Startup   Current   Notes
microstack.cinder-backup               disabled  inactive  -
microstack.cinder-scheduler            enabled   active    -
microstack.cinder-uwsgi                disabled  active    -
microstack.cinder-volume               disabled  inactive  -
microstack.cluster-uwsgi               enabled   active    -
microstack.external-bridge             enabled   inactive  -
microstack.filebeat                    disabled  inactive  -
microstack.glance-api                  disabled  active    -
microstack.horizon-uwsgi               enabled   active    -
microstack.iscsid                      disabled  inactive  -
microstack.keystone-uwsgi              disabled  active    -
microstack.libvirtd                    enabled   active    -
microstack.load-modules                enabled   inactive  -
microstack.memcached                   enabled   active    -
microstack.mysqld                      enabled   active    -
microstack.neutron-api                 disabled  active    -
microstack.neutron-ovn-metadata-agent  disabled  active    -
microstack.nginx                       disabled  active    -
microstack.nova-api                    disabled  active    -
microstack.nova-api-metadata           disabled  active    -
microstack.nova-compute                disabled  active    -
microstack.nova-conductor              disabled  active    -
microstack.nova-scheduler              disabled  active    -
microstack.nova-spicehtml5proxy        disabled  active    -
microstack.nrpe                        disabled  inactive  -
microstack.ovn-controller              enabled   active    -
microstack.ovn-northd                  enabled   active    -
microstack.ovn-ovsdb-server-nb         enabled   active    -
microstack.ovn-ovsdb-server-sb         enabled   active    -
microstack.ovs-vswitchd                enabled   active    -
microstack.ovsdb-server                enabled   active    -
microstack.placement-uwsgi             disabled  active    -
microstack.rabbitmq-server             enabled   active    -
microstack.registry                    disabled  active    -
microstack.setup-lvm-loopdev           disabled  inactive  -
microstack.target                      disabled  inactive  -
microstack.telegraf                    disabled  inactive  -
microstack.virtlogd                    enabled   active    -

Same here. Worked fine last night, but this morning, it’s dead, and with (at least superficially) the same error. Unlike the OP, I had created and torn down instances, networks, etc. It still responds to “microstack.openstack” calls, so it’s not a foundational issue, e.g., MySQL is clearly still working, as are the corresponding services. Rebooting does seem to have “fixed” it, but also seems, well, a sub-optimal solution.

Hello,
I’m having the exact same issue. Did somebody figure out a resolution?
Thank you.

FileNotFoundError at /auth/login/

[Errno 2] No such file or directory

Request Method: GET
Request URL: http://192.168.79.129/auth/login/?next=/
Django Version: 2.2.12
Exception Type: FileNotFoundError
Exception Value: [Errno 2] No such file or directory
Exception Location: /usr/lib/python3.8/posixpath.py in abspath, line 379
Python Executable: /snap/microstack/222/bin/uwsgi
Python Version: 3.8.5
Python Path: ['.', '', '/var/snap/microstack/common/etc/horizon/uwsgi/snap', '/usr/lib/python3.8', '/usr/lib/python3/dist-packages', '/snap/microstack/222/usr/lib/python3.8', '/snap/microstack/222/lib/python3.8/site-packages', '/snap/microstack/222/usr/lib/python3/dist-packages', '/snap/microstack/222/usr/lib/python38.zip', '/snap/microstack/222/usr/lib/python3.8/lib-dynload', '/snap/microstack/222/lib/python3.8/site-packages/openstack_dashboard']
Server time: Sat, 6 Mar 2021 18:38:56 +0000

Happened to me as well. It had been more than 1 week since I last used it, and there was a reboot in between, but judging from other responses, I see that it can happen at any time.

Aha, this looks like https://bugs.launchpad.net/microstack/+bug/1910300, and for me a sudo systemctl restart snap.microstack.horizon-uwsgi brought it right back.
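
For reference, a minimal form of that workaround plus a quick sanity check (the status and curl steps are optional, and the address is just the one from the first report; substitute your own):

$ sudo systemctl restart snap.microstack.horizon-uwsgi
$ systemctl status snap.microstack.horizon-uwsgi --no-pager
$ curl -I http://10.20.20.1/auth/login/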

I ran into this problem as well and was able to correlate it back to a snapd update. The odd thing was the FileNotFoundError, which was actually raised inside posixpath.abspath() because the underlying os.getcwd() call failed. It appears that something in the snapd refresh causes the service’s current working directory to change. I’ve posted some info on the bug, but as @mbeierl states, a workaround is to restart the snap.microstack.horizon-uwsgi service.
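
To illustrate the failure mode outside of MicroStack (a rough sketch; the directory name is made up): if a process’s working directory is removed out from under it, resolving a relative path goes through os.getcwd() and fails with exactly this error, which is what the posixpath.abspath() location in the traceback points at:

$ mkdir /tmp/cwd-demo && cd /tmp/cwd-demo
$ rmdir /tmp/cwd-demo
$ python3 -c "import os; os.path.abspath('.')"
Traceback (most recent call last):
  ...
FileNotFoundError: [Errno 2] No such file or directory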

That is exactly the expected behaviour. During an update, the SNAP_DATA and SNAP_USER_DATA variables start pointing to the new (versioned) location (after all the data in them has been copied forward), and the /current symlinks pointing to them are updated too. Then AppArmor and the mount namespace are updated to only allow access to the new location …

In normal operation snapd stops all of a snap’s services before that process and starts them again after it is fully done … smells like this did not happen here for some reason …
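
If it happens again, a couple of standard commands can help confirm whether the last refresh actually stopped and restarted the Horizon service (purely a diagnostic sketch; adjust the --since window as needed):

$ snap changes
$ journalctl -u snap.microstack.horizon-uwsgi.service --since yesterday | grep -E 'Stopped|Started'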