docker toolbox for windows
Starting the docker machine msys works without issue (dm is an alias for docker-machine here):
$ dm start msys
Starting "msys"...
(msysdev) Check network to re-create if needed...
(msysdev) Waiting for an IP...
Machine "msys" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the docker-machine env
command.
I can ssh into the docker machine, and docker is running fine inside:
$ docker-machine ssh msys
(boot2docker welcome banner)
docker@msys:~$ ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: dummy0: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 6e:2e:fa:71:08:09 brd ff:ff:ff:ff:ff:ff
3: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:e2:1d:ac brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fee2:1dac/64 scope link
       valid_lft forever preferred_lft forever
4: eth1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:6c:92:01 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.100/24 brd 192.168.99.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe6c:9201/64 scope link
       valid_lft forever preferred_lft forever
5: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:6a:23:60:31 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:6aff:fe23:6031/64 scope link
       valid_lft forever preferred_lft forever
docker@msys:~$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.0.2.2        0.0.0.0         UG    1      0        0 eth0
10.0.2.0        *               255.255.255.0   U     0      0        0 eth0
127.0.0.1       *               255.255.255.255 UH    0      0        0 lo
172.17.0.0      *               255.255.0.0     U     0      0        0 docker0
192.168.99.0    *               255.255.255.0   U     0      0        0 eth1
However, docker-machine env fails, so the local docker client does not work at all:
$ docker-machine env msysdev
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.99.100:2376": dial tcp 192.168.99.100:2376: i/o timeout
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
$ docker ps
error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.31/containers/json: open //./pipe/docker_engine: The system cannot find the file specified. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
From the host I can still ping the docker machine:
$ ping 192.168.99.100
Pinging 192.168.99.100 with 32 bytes of data:
Reply from 192.168.99.100: bytes=32 time
minikube on windows
Starting minikube works without issue:
$ minikube start
Starting local Kubernetes v1.8.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.102
$ minikube ip
192.168.99.102
However, the local kubectl client cannot connect:
$ kubectl get pods
Unable to connect to the server: dial tcp 192.168.99.102:8443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
The ping test is OK:
$ ping 192.168.99.102
Pinging 192.168.99.102 with 32 bytes of data:
Reply from 192.168.99.102: bytes=32 time=1ms TTL=64
Reply from 192.168.99.102: bytes=32 time=1ms TTL=64
Reply from 192.168.99.102: bytes=32 time=1ms TTL=64
Reply from 192.168.99.102: bytes=32 time=1ms TTL=64
Ping statistics for 192.168.99.102:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 1ms, Maximum = 1ms, Average = 1ms
minikube ssh also works:
$ minikube ssh
(minikube welcome banner)
$ ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ca:0d:a3 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 84384sec preferred_lft 84384sec
    inet6 fe80::a00:27ff:feca:da3/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:c3:2b:2f brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.102/24 brd 172.17.17.255 scope global dynamic eth1
       valid_lft 233sec preferred_lft 233sec
    inet6 fe80::a00:27ff:fec3:2b2f/64 scope link
       valid_lft forever preferred_lft forever
4: sit0@NONE: mtu 1480 qdisc noop state DOWN group default qlen 1
    link/sit 0.0.0.0 brd 0.0.0.0
5: docker0: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:f6:4a:6d:21 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:f6ff:fe4a:6d21/64 scope link
       valid_lft forever preferred_lft forever
7: veth1065234@if6: mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether fe:98:cc:63:92:6a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::fc98:ccff:fe63:926a/64 scope link
       valid_lft forever preferred_lft forever
9: vethf838f75@if8: mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 1e:3e:fc:a3:13:c2 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::1c3e:fcff:fea3:13c2/64 scope link
       valid_lft forever preferred_lft forever
$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.0.2.2        0.0.0.0         UG    1024   0        0 eth0
10.0.2.0        *               255.255.255.0   U     0      0        0 eth0
10.0.2.2        *               255.255.255.255 UH    1024   0        0 eth0
192.168.99.0    *               255.255.255.0   U     0      0        0 eth1
172.18.0.0      *               255.255.0.0     U     0      0        0 docker0
troubleshoot
I put docker and minikube together here because they show the same issue: the local docker and kubectl clients cannot talk to the docker machine / minikube cluster running in a boot2docker VM in VirtualBox.
- ping/ssh are working, and the routing tables look OK
- verified the firewall is ON, and I cannot turn it off :(
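Since ping succeeds while the TLS ports time out, it helps to probe raw TCP reachability to the daemon ports directly, independent of any docker/kubectl TLS handling. A minimal sketch (the helper name and approach are mine, not from the original setup):

```shell
# probe_port: hypothetical helper to test raw TCP reachability to host:port.
# ICMP (ping) may pass a firewall while TCP to 2376/8443 is filtered.
probe_port() {
  host="$1"; port="$2"
  # bash can open TCP connections via /dev/tcp; give up after 3 seconds
  if timeout 3 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
    echo "open"
  else
    echo "blocked/filtered"
  fi
}
# usage: probe_port 192.168.99.100 2376
```

If this reports "blocked/filtered" while ping works, the problem is the firewall on the TCP port, not the certificates.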
There are tons of workarounds for TLS verification against the docker machine or minikube cluster, but none of them work when the firewall is ON.
The only remaining option is to disable TLS verification; this is acceptable only for local docker testing and education purposes.
remedy for docker
Before disabling TLS verification, there are a few options for docker:
- docker-machine ssh into the docker VM and do everything inside the VM; this is what most people do when the local docker client is not working
- from the host, run docker commands via ssh
ssh speed comparison:
- docker-machine ssh machine_name is very slow
- dmssh is my small function script in portabledevops; it uses the same ssh port-forwarding approach, but the speed is improved a lot
- nassh is a wrapper around the native ssh client; it is very fast
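As a rough idea of the nassh approach, a wrapper can call the native ssh client directly with the key that docker-machine generated, instead of going through docker-machine's slower built-in ssh. This is a hypothetical reimplementation (the real script lives in portabledevops and may differ):

```shell
# Hypothetical nassh-style wrapper: run a command on a docker machine
# using the native ssh client and the docker-machine generated key.
nassh() {
  machine="$1"; shift
  ip=$(docker-machine ip "$machine")
  ssh -i "$HOME/.docker/machine/machines/$machine/id_rsa" \
      -o StrictHostKeyChecking=no "docker@$ip" "$@"
}
# usage: nassh msys docker ps
```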
$ time docker-machine ssh msys docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

real    0m23.898s
user    0m0.000s
sys     0m0.437s
$ time dmssh msys docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

real    0m3.147s
user    0m0.212s
sys     0m3.109s
$ time nassh msys docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

real    0m1.333s
user    0m0.000s
sys     0m0.452s
So I prefer nassh for running docker commands from the host; it is almost as fast as running them inside the docker VM.
$ nassh msys docker run hello-world
Hello from Docker!
open issues in github
- issue on docker/machine: <https://github.com/docker/machine/issues/4339>
- issue on kubernetes/minikube: <https://github.com/kubernetes/minikube/issues/2316>
minikube-vpn
The remedy comes from thegridman's comment on minikube/issues/1099:
- set up a port forward from host port 8443 to minikube VM port 8443 in VirtualBox
- create a new context minikube-vpn without TLS verification
This is the best solution for a local minikube cluster without TLS verification when running behind a firewall that you cannot or don't want to turn off.
$ VBoxManage controlvm minikube natpf1 k8s-apiserver,tcp,127.0.0.1,8443,,8443
$ VBoxManage controlvm minikube natpf1 k8s-dashboard,tcp,127.0.0.1,30000,,30000
$ kubectl config set-cluster minikube-vpn --server=https://127.0.0.1:8443 --insecure-skip-tls-verify
Cluster "minikube-vpn" set.
$ kubectl config set-context minikube-vpn --cluster=minikube-vpn --user=minikube
Context "minikube-vpn" created.
$ kubectl config use-context minikube-vpn
Switched to context "minikube-vpn".
$ kubectl config get-contexts
CURRENT   NAME           CLUSTER        AUTHINFO   NAMESPACE
*         minikube-vpn   minikube-vpn   minikube
          minikube       minikube       minikube
Now kubectl can talk to the minikube cluster as usual:
$ kubectl get pods
No resources found.
kubectl now talks to 127.0.0.1:8443 instead of 192.168.99.102:8443:
$ kubectl get pods --v 7
I1214 02:04:04.661382    7184 loader.go:357] Config loaded from file C:\oldhorse\portableapps\msys64\home\username/.kube/config
I1214 02:04:04.823382    7184 round_trippers.go:414] GET https://127.0.0.1:8443/api
I1214 02:04:05.218882    7184 round_trippers.go:414] GET https://127.0.0.1:8443/api/v1/namespaces/default/pods
I1214 02:04:05.219382    7184 round_trippers.go:421] Request Headers:
I1214 02:04:05.223382    7184 round_trippers.go:424]     Accept: application/json
I1214 02:04:05.223882    7184 round_trippers.go:424]     User-Agent: kubectl.exe/v1.8.0 (windows/amd64) kubernetes/6e93783
I1214 02:04:05.226882    7184 round_trippers.go:439] Response Status: 200 OK in 3 milliseconds
No resources found.
If you also want the local docker client to talk to minikube, forward 127.0.0.1:2374 to VM port 2376 and disable TLS verification as below:
$ VBoxManage controlvm minikube natpf1 k8s-docker,tcp,127.0.0.1,2374,,2376
$ eval $(minikube docker-env)
$ unset DOCKER_TLS_VERIFY
$ export DOCKER_HOST="tcp://127.0.0.1:2374"
$ alias docker='docker --tls'
Let’s verify the local docker client:
$ docker ps
CONTAINER ID        IMAGE                                            COMMAND                  CREATED             STATUS              PORTS               NAMES
ed0e8735c0a8        gcr.io/google_containers/k8s-dns-sidecar-amd64   "/sidecar --v=2 --..."   39 minutes ago      Up 40 minutes                           k8s_sidecar_kube-dns-86f6f55dd5-zfrfd_kube-system_9f5dbbb0-e084-11e7-bcdb-080027c9ba66_7
I made this into a small script to simplify the setup; you can find it here.
$ minikubefw.sh
$ kubectl get pods
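For reference, the setup steps above can be collected into a function like the following; this is only a sketch of what such a script could contain (the actual minikubefw.sh may differ):

```shell
# Sketch of a minikubefw-style setup (hypothetical): forward the apiserver
# and dashboard ports to localhost, then create and activate a kubectl
# context that skips TLS verification.
minikubefw() {
  VBoxManage controlvm minikube natpf1 k8s-apiserver,tcp,127.0.0.1,8443,,8443
  VBoxManage controlvm minikube natpf1 k8s-dashboard,tcp,127.0.0.1,30000,,30000
  kubectl config set-cluster minikube-vpn --server=https://127.0.0.1:8443 --insecure-skip-tls-verify
  kubectl config set-context minikube-vpn --cluster=minikube-vpn --user=minikube
  kubectl config use-context minikube-vpn
}
```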
docker machine without TLS verification
This is the same concept as the minikube-vpn context. Again, this remedy is only for local docker testing without TLS verification when a firewall exists and you cannot or don't want to turn it off.
Assume the docker machine name is msys:
VBoxManage controlvm msys natpf1 docker-fw,tcp,127.0.0.1,2375,,2376
export DOCKER_MACHINE_NAME="msys"
export DOCKER_HOST="tcp://127.0.0.1:2375"
export DOCKER_CERT_PATH="/home/username/.docker/machine/machines/msys"
$ docker --tls ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
I made this into a small script to simplify the setup work; you can find it here. Please source dockerfw.sh, otherwise the environment variables cannot be exported into the current shell.
$ . dockerfw.sh msys
$ env | grep DOCK
DOCKER_HOST=tcp://127.0.0.1:2375
DOCKER_MACHINE_NAME=msys
DOCKER_CERT_PATH=/C/oldhorse/portableapps/msys64/home/username/.docker/machine/machines/msys
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:be0cd392e45be79ffeffa6b05338b98ebb16c87b255f48e297ec7f98e123905c
Status: Downloaded newer image for hello-world:latest
$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
2e4a28e396f2        hello-world         "/hello"            8 seconds ago       Exited (0) 7 seconds ago                       blissful_mayer
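A sketch of what a dockerfw-style setup could look like, written as a function here (hypothetical; the actual dockerfw.sh may differ). Because it only sets environment variables, it has to run in the current shell, which is why the script must be sourced rather than executed:

```shell
# Hypothetical dockerfw-style setup: forward localhost:2375 to the VM's
# docker daemon port 2376, then point the local client at the forward.
dockerfw() {
  machine="${1:-msys}"   # default machine name is an assumption
  VBoxManage controlvm "$machine" natpf1 "docker-fw,tcp,127.0.0.1,2375,,2376"
  export DOCKER_MACHINE_NAME="$machine"
  export DOCKER_HOST="tcp://127.0.0.1:2375"
  export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/$machine"
  # always pass --tls so the client encrypts without verifying the cert
  alias docker='docker --tls'
}
```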
To switch back to full TLS verification, just reset with:
eval $(docker-machine env "machine name")
conclusion
- when the firewall is ON, ssh/ping to the docker VM still work, but remote TLS verification always fails with an i/o timeout
- as a remedy, TLS verification can be disabled for both the docker machine and the minikube cluster via local port forwarding