JSONPath is used to filter specific fields out of the JSON tree of objects in a Kubernetes cluster.
pod json
First of all, we need to understand and become familiar with the JSON tree structure of a pod.
kubectl get pods -o json
From the output we will see the top-level objects, with the per-pod objects nested under items:
- apiVersion
- items
  - metadata
  - spec
    - containers
    - volumes
  - status
    - hostIP
    - podIP
vagrant@master:~$ kubectl get po -o wide
vagrant@master:~$ kubectl get po -o json
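Another quick way to explore the tree level by level is jq (assuming it is installed; it is not part of kubectl):

kubectl get pods -o json | jq 'keys'                  # top level: apiVersion, items, kind, metadata
kubectl get pods -o json | jq '.items[0] | keys'      # one pod: apiVersion, kind, metadata, spec, status
kubectl get pods -o json | jq '.items[0].spec | keys' # spec: containers, volumes, ...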
If you just want to get some info quickly and are not very familiar with the jsonpath filter syntax, you can grep for a keyword:
vagrant@master:~$ kubectl get po -o json | grep podIP -A 3
However, it is better to know how the jsonpath filter syntax works, so you can get the expected result quickly and precisely. You don't need to know all of it, but at least the basics.
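The basics are small; these are the constructs this post relies on (full details in the jsonpath reference linked at the end):

{.status.podIP}                         # child operator: walk down named fields
{.items[0]}                             # subscript: one element of a list
{.items[*]}                             # wildcard: every element of a list
{.items..name}                          # recursive descent: match a field at any depth
{.addresses[?(@.type=="InternalIP")]}   # filter: keep elements matching a condition
{range .items[*]}...{end}               # iterate, emitting text per element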
search pod ip
vagrant@master:~$ kubectl get pods test -o=jsonpath='{.status.podIP}{"\n"}'
192.168.171.66

or

vagrant@master:~$ kubectl get pods -o=jsonpath='{.items[0].status.podIP}{"\n"}'
192.168.171.66
search all pods ip
kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'status.podIP']}" test web-96d5df5c8-6l5nx web-96d5df5c8-x5rs8 192.168.171.66 192.168.171.68 192.168.171.67
The string output reads better with explicit formatting, using range to iterate and tab/newline separators:
"status": "podIP": "192.168.120.3", "podIPs": [ { "ip": "192.168.120.3" } ], vagrant@master:~$ kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}' test 192.168.171.66 web-96d5df5c8-6l5nx 192.168.171.68 web-96d5df5c8-x5rs8 192.168.171.67
list all containerIDs of initContainers of all pods
"status": "initContainerStatuses": [ { "containerID": "docker://0bc99ad1ed453c7057dd79eac20ce6448fc2f8dbd52accfba7e57d4fbeeab070", kubectl get pods --all-namespaces -o jsonpath='{range .items[*].status.initContainerStatuses[*]}{.containerID}{"\n"}{end}' | cut -d/ -f3 0bc99ad1ed453c7057dd79eac20ce6448fc2f8dbd52accfba7e57d4fbeeab070 c39f99eea893d94e26972a3b8210cc18da15db50248031e6991dd18eb980abfd 8793729fbf41b319adc629885d9c5d4f26d82c9bcd606a5a1e1446d1c0e7e862 558dd5e0ce7fa442350b67cedc1ffb5fb525ac61c6e4c6022f4acbbc92c03121List all containerIDs of initContainer of all pods
send command to all pods
vagrant@master:~$ kubectl get po -o jsonpath={.items..metadata.name}
test web-96d5df5c8-6l5nx web-96d5df5c8-x5rs8

vagrant@master:~$ for pod in $(kubectl get po -o jsonpath={.items..metadata.name}); do echo $pod && kubectl exec -it $pod env; done
test
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=test
TERM=xterm
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
NGINX_VERSION=1.21.3
NJS_VERSION=0.6.2
PKG_RELEASE=1~buster
HOME=/root
web-96d5df5c8-6l5nx
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=web-96d5df5c8-6l5nx
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
NGINX_VERSION=1.21.3
NJS_VERSION=0.6.2
PKG_RELEASE=1~buster
HOME=/root
web-96d5df5c8-x5rs8
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=web-96d5df5c8-x5rs8
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
NGINX_VERSION=1.21.3
NJS_VERSION=0.6.2
PKG_RELEASE=1~buster
HOME=/root
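Note the deprecation warning above: the positional kubectl exec [POD] [COMMAND] form is going away, so the same loop written with the current -- syntax (dropping -it, since env needs no TTY) looks like:

for pod in $(kubectl get po -o jsonpath='{.items[*].metadata.name}'); do
  echo "$pod"
  kubectl exec "$pod" -- env
done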
get all container images, sorted and counted
"containers": [ { "args": [ "--secure-port=5443" ], ... ], "image": "docker.io/calico/apiserver:v3.20.2", vagrant@master:~$ kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" |\ tr -s '[[:space:]]' '\n' |\ sort |\ uniq -c 1 docker.io/calico/apiserver:v3.20.2 1 docker.io/calico/kube-controllers:v3.20.2 2 docker.io/calico/node:v3.20.2 2 docker.io/calico/typha:v3.20.2 2 k8s.gcr.io/coredns/coredns:v1.8.4 1 k8s.gcr.io/etcd:3.5.0-0 1 k8s.gcr.io/kube-apiserver:v1.22.3 1 k8s.gcr.io/kube-controller-manager:v1.22.3 2 k8s.gcr.io/kube-proxy:v1.22.3 1 k8s.gcr.io/kube-scheduler:v1.22.3 3 nginx 1 quay.io/tigera/operator:v1.20.4
node json
vagrant@master:~$ kubectl get node -o json
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "v1",
            "kind": "Node",
            "metadata": {
            ...
}
list all nodes' InternalIP and ExternalIP
This is a good example of a filter (search) condition.
"status": { "addresses": [ { "address": "192.168.120.2", "type": "InternalIP" }, { "address": "master", "type": "Hostname" } ], vagrant@master:~$ kubectl get nodes -o jsonpath='{.items[*].status.addresses}' [{"address":"192.168.120.2","type":"InternalIP"},{"address":"master","type":"Hostname"}] [{"address":"192.168.120.3","type":"InternalIP"},{"address":"worker","type":"Hostname"}] vagrant@master:~$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}' vagrant@master:~$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' 192.168.120.2 192.168.120.3
search all nodes in Ready status
vagrant@master:~$ JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
> && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"
master:NetworkUnavailable=False;MemoryPressure=False;DiskPressure=False;PIDPressure=False;Ready=True;worker:NetworkUnavailable=False;MemoryPressure=False;DiskPressure=False;PIDPressure=False;Ready=True;
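If only the Ready condition matters, filtering on the condition type skips the grep; this variant prints one name/status pair per node:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'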
list taint status of all nodes
"spec": { "podCIDR": "192.168.0.0/24", "podCIDRs": [ "192.168.0.0/24" ], "taints": [ { "effect": "NoSchedule", "key": "role", "value": "admin" } vagrant@master:~$ kubectl taint nodes master role=admin:NoSchedule node/master tainted vagrant@master:~$ kubectl get nodes -o jsonpath="{range .items[*]}{.metadata.name} {.spec.taints[?(@.effect=='NoSchedule')].effect}{\"\n\"}{end}" master NoSchedule worker
config json
vagrant@master:~$ kubectl config view -o json
{
    "kind": "Config",
    "apiVersion": "v1",
    "preferences": {},
    "clusters": [
        {
            "name": "kubernetes",
            "cluster": {
                "server": "https://192.168.120.2:6443",
                "certificate-authority-data": "DATA+OMITTED"
            }
        }
    ],
    "users": [
        {
            "name": "kubernetes-admin",
            "user": {
                "client-certificate-data": "REDACTED",
                "client-key-data": "REDACTED"
            }
        }
    ],
    ...
}
get all users
vagrant@master:~$ kubectl config view -o jsonpath='{.users[*].name}'
kubernetes-admin
get one user details
vagrant@master:~$ kubectl config view -o jsonpath='{.users[?(@.name == "kubernetes-admin")].user}'
{"client-certificate-data":"REDACTED","client-key-data":"REDACTED"}
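You can also reach into the matched user for a single field; a sketch assuming kubectl's jsonpath parses the dashed key directly (it accepts dashes in field names):

kubectl config view -o jsonpath='{.users[?(@.name == "kubernetes-admin")].user.client-certificate-data}'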
further reading
https://kubernetes.io/docs/reference/kubectl/jsonpath/
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/