WSL for remote docker and k8s cluster
WSL works as a workstation: a local docker client and kubectl talk to a remote Docker server or k8s cluster, so instead of logging in to the remote node, it is much easier to integrate docker and k8s into the native dev environment.
The WSL here is WSL1; WSL2, which is a full Linux VM running on top of Hyper-V in the background, is not needed for this setup since only the client tools are used.
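To double-check which WSL version a distro is running, you can list the distros from PowerShell (the distro name below is just an example for illustration):

PS> wsl.exe --list --verbose
  NAME            STATE           VERSION
* Ubuntu-20.04    Running         1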
Local docker
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
Here is how dockerd currently listens (the default fd:// socket only), followed by a systemd drop-in that changes the config to also expose the API over TCP; we use the un-encrypted port 2375,
vagrant@k8s-master$ ps -aux | grep dockerd
root       575  1.3  2.7 1128308 111300 ?      Ssl  11:56   2:46 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
vagrant@k8s-master$ sudo mkdir -p /etc/systemd/system/docker.service.d
vagrant@k8s-master$ sudo nano /etc/systemd/system/docker.service.d/options.conf
vagrant@k8s-master$ cat /etc/systemd/system/docker.service.d/options.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:// -H tcp://0.0.0.0:2375
With the drop-in in place (the empty ExecStart= line clears the packaged default before the new command line is set), reload systemd and restart docker; dockerd now listens on tcp in addition to the unix socket and still works fine inside the remote VM,
vagrant@k8s-master$ sudo systemctl daemon-reload
vagrant@k8s-master$ sudo systemctl restart docker
vagrant@k8s-master$ ps -aux | grep dockerd
root    369634  0.7  2.4 848600 97668 ?        Ssl  18:34   0:00 /usr/bin/dockerd -H unix:// -H tcp://0.0.0.0:2375
vagrant@k8s-master$ docker info
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 39
  Running: 18
  Paused: 0
  Stopped: 21
 Images: 12
 Server Version: 20.10.7
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version:
 runc version:
 init version:
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-42-generic
 Operating System: Ubuntu 20.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 3.844GiB
 Name: k8s-master
 ID: 5P2F:IOHX:NQFG:2KQ4:LION:KXPW:ANBR:5EE5:LAWN:2BGE:ILGK:7MRY
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: API is accessible on http://0.0.0.0:2375 without encryption.
         Access to the remote API is equivalent to root access on the host. Refer
         to the 'Docker daemon attack surface' section in the documentation for
         more information: https://docs.docker.com/go/attack-surface/
WARNING: No swap limit support
Next, port 2375 on the VirtualBox VM (the Docker server) needs to be mapped to a port on the laptop / WSL host.
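One way to do this, assuming the VM is managed by Vagrant (as the vagrant@ prompt suggests), is a forwarded_port rule in the Vagrantfile; the same rule can also be added directly in the VirtualBox port-forwarding settings. A minimal sketch:

# Vagrantfile (sketch): forward 127.0.0.1:2375 on the host to 2375 in the VM
config.vm.network "forwarded_port", guest: 2375, host: 2375, host_ip: "127.0.0.1"

Run vagrant reload afterwards so the rule takes effect. Alternatively, if the VM's private address (192.168.22.23 in this setup) is reachable from WSL and no firewall is in between, DOCKER_HOST can point straight at tcp://192.168.22.23:2375 and no port mapping is needed.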
Export the DOCKER_HOST setting on WSL,
wsl$ unset DOCKER_TLS_VERIFY
wsl$ export DOCKER_HOST="tcp://127.0.0.1:2375"
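To make this persist across new shells, the same two lines can go into ~/.bashrc (assuming bash is the shell on WSL):

# ~/.bashrc: point the local docker client at the remote daemon
unset DOCKER_TLS_VERIFY
export DOCKER_HOST="tcp://127.0.0.1:2375"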
Verify that the WSL docker client works against the remote VM's Docker server,
wsl$ docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.6.1-docker)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 39
  Running: 18
  Paused: 0
  Stopped: 21
 Images: 12
 Server Version: 20.10.7
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version:
 runc version:
 init version:
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-42-generic
 Operating System: Ubuntu 20.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 1
 Total Memory: 3.844GiB
 Name: k8s-master
 ID: 5P2F:IOHX:NQFG:2KQ4:LION:KXPW:ANBR:5EE5:LAWN:2BGE:ILGK:7MRY
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: API is accessible on http://0.0.0.0:2375 without encryption.
         Access to the remote API is equivalent to root access on the host. Refer
         to the 'Docker daemon attack surface' section in the documentation for
         more information: https://docs.docker.com/go/attack-surface/
WARNING: No swap limit support
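Beyond docker info, a quick end-to-end check is to list the remote containers and run a throw-away container; note that the container actually runs on the remote VM, not on WSL:

wsl$ docker ps
wsl$ docker run --rm hello-world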
Local kubectl
Download and install kubectl on WSL; refer to the official link,
https://v1-21.docs.kubernetes.io/docs/tasks/tools/
wsl$ curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
wsl$ chmod +x ./kubectl
wsl$ sudo mv ./kubectl /usr/local/bin/kubectl
wsl$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
wsl$ cd
wsl$ scp -r vagrant@192.168.22.23:/home/vagrant/.kube .
wsl$ kubectl get no
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   Ready      control-plane,master   10d   v1.21.4
k8s-node1    NotReady   <none>                 10d   v1.21.4
k8s-node2    NotReady   <none>                 10d   v1.21.4
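The stable.txt URL resolves to the latest kubectl release (v1.22.2 here), while the cluster runs v1.21.4; that is still within the supported one-minor-version skew, but if you prefer to pin the client to the cluster version, substitute a fixed tag for stable.txt, e.g.

wsl$ curl -LO "https://storage.googleapis.com/kubernetes-release/release/v1.21.4/bin/linux/amd64/kubectl"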
As you can see, all the magic is in the .kube/config file, which includes the remote cluster IP, port and credentials,
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <skip>
    server: https://192.168.22.23:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <skip>
    client-key-data: <skip>
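Once this file sits in ~/.kube/config, kubectl picks it up automatically; the standard config subcommands are a quick way to confirm which cluster and context are in use:

wsl$ kubectl config current-context
kubernetes-admin@kubernetes
wsl$ kubectl config view --minify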
short alias and completion
add the lines below to ~/.bashrc
# short for kubectl
source <(kubectl completion bash)
alias k=kubectl
complete -F __start_kubectl k
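After reloading the shell, the short form works together with tab completion (the complete line re-attaches kubectl's bash completion to the k alias):

wsl$ source ~/.bashrc
wsl$ k get no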
test local kubectl on wsl
wsl$ k run test --image=nginx
pod/test created
wsl$ k get po -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP                NODE         NOMINATED NODE   READINESS GATES
test   1/1     Running   0          26s   192.168.235.217   k8s-master   <none>           <none>
It is normal that you cannot reach the pod IP directly from WSL, since the pod network is only routable inside the remote k8s cluster,
wsl$ curl 192.168.235.217
^C
You need to access the pod from inside the k8s cluster, e.g. from a cluster node,
wsl$ ssh vagrant@192.168.22.23
vagrant@k8s-master$ curl 192.168.235.217
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
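If you prefer not to ssh into a cluster node, kubectl port-forward is a standard alternative that tunnels through the API server, so the pod becomes reachable from WSL on a local port (8080 below is arbitrary):

wsl$ k port-forward pod/test 8080:80 &
wsl$ curl 127.0.0.1:8080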