I’m installing Kubernetes on an Ubuntu server hosted in an OpenStack VM with 8 vCPUs, 16 GB of RAM, and 160 GB of storage. All ports are open in OpenStack and no iptables rules are set.
I ran juju ssh etcd/0 and issued:
sudo snap install etcd
This was the result detailing the TLS issue.
http://paste.ubuntu.com/25976912/
(Warning: this is large and has a lot of Juju noise.)
Issuing sudo snap install core results in a similar error:
ubuntu@juju-8ea5d5-2:~$ sudo snap install core|pastebinit
error: cannot perform the following tasks:
If this is ongoing, could you do a network capture?
The last time I saw something like this there were a bunch of RSTs that for some reason tripped up Go’s network stack, but not things like curl. I’d love to have a reproducer for that simpler than “share network over USB to a Raspberry Pi while in a poorly-connected part of France”.
@JamesBenson I seem to have lost you on IRC, and as I’m about to go offline for a bit I thought I’d write this down here.
First, take the URL from the error, and see whether you can download it with wget or curl. Note that you’ll usually have to quote it so the shell doesn’t get confused by the ampersand in the query string. In your example (but AFAIK that one won’t work now because it’s too old),
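For instance, the fetch-and-quote step might look like this (the URL below is a made-up placeholder with the same general shape, not the one from the original error):

```shell
# Placeholder URL -- substitute the real one from your snapd error message.
# The quotes are essential: unquoted, the shell treats '&' as "run the
# command so far in the background", silently truncating the query string.
url='https://example.com/download/core?channel=stable&arch=amd64'
printf 'fetching: %s\n' "$url"
# curl -fLO "$url"    # or: wget "$url" -- uncomment once you have the real URL
```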
If that doesn’t work, then there’s an issue in your networking. If it does work however, download http://people.canonical.com/~john/gowget and try using that to download the URL. All it does is download the URL you give it, but it’s written in Go so it’s the same network stack.
If that fails with an error similar to the one snapd was giving you, repeat with http://people.canonical.com/~john/gowget19 (this is the same program, built against a newer Go version).
Next step after this would be to run gowget while capturing the network with Wireshark.
As discussed on IRC, this was an MTU mismatch. The OpenStack cloud in question is configured such that instances have an MTU of just 1450 rather than the standard 1500 (typical when the tenant network is VXLAN-encapsulated, which consumes 50 bytes of headroom per frame), but LXD always creates lxdbr0 with an MTU of 1500. So containers were trying to send 1500-byte frames, which were being dropped by the host.
This can be diagnosed from the output of ip link on the LXD host: if the physical (well, virtual, in this case) Ethernet interface has an MTU below 1500 but lxdbr0’s is still 1500, traffic from inside the containers will be flaky.
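As a concrete check, the MTUs can also be read straight from sysfs without parsing ip link; the interface names ens3 and lxdbr0 in the comments below are assumptions for a typical OpenStack guest running LXD:

```shell
# Print the MTU of every interface; /sys/class/net is standard on Linux.
for dev in /sys/class/net/*; do
    printf '%-12s mtu %s\n' "${dev##*/}" "$(cat "$dev/mtu")"
done

# On the affected host you would then compare, e.g.:
#   [ "$(cat /sys/class/net/lxdbr0/mtu)" -gt "$(cat /sys/class/net/ens3/mtu)" ] \
#       && echo 'lxdbr0 MTU exceeds the uplink -- container traffic will break'
# A quick on-the-wire confirmation is a DF-flagged ping sized to need a
# full 1500-byte frame (1472 bytes of payload + 28 bytes of ICMP/IP headers):
#   ping -M do -s 1472 -c 3 archive.ubuntu.com
```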
Running lxc network set lxdbr0 bridge.mtu 1450 before creating containers solves the problem. If you’ve already started containers, you’ll need to restart them, or else lower the MTU on eth0 inside each container.